A Multifaceted Case


Box 1

The approach of an expert clinician to a clinical conundrum is revealed through the presentation of an actual patient's case in a format typical of a morning report. As in patient care, sequential pieces of information are provided to the clinician, who is unfamiliar with the case. The focus is on the thought processes of both the clinical team caring for the patient and the discussant.

Box 2

This icon represents the patient's case. Each paragraph that follows represents the discussant's thoughts.

A 67-year-old male presented to an outside hospital with a 1-day history of fevers up to 39.4°C, bilateral upper extremity weakness, and confusion. Forty-eight hours prior to his presentation he had undergone uncomplicated bilateral carpal tunnel release surgery for the complaint of bilateral upper extremity paresthesias.

Bilateral carpal tunnel syndrome should prompt consideration of systemic diseases that infiltrate or impinge both canals (eg, rheumatoid arthritis, acromegaly, hypothyroidism, amyloidosis), although it is most frequently explained by a bilateral repetitive stress (eg, workplace typing). The development of upper extremity weakness suggests that an alternative condition such as cervical myelopathy, bilateral radiculopathy, or a rapidly progressive peripheral neuropathy may be responsible for his paresthesias. It would be unusual for a central nervous system process to selectively cause bilateral upper extremity weakness. Occasionally, patients emerge from surgery with limb weakness caused by peripheral nerve injury sustained from malpositioning of the extremity, but this would have been evident immediately following the operation.

Postoperative fevers are frequently unexplained, but require a search for common healthcare‐associated infections, such as pneumonia, urinary tract infection, intravenous catheter thrombophlebitis, wound infection, or Clostridium difficile colitis. However, such complications are unlikely following an ambulatory procedure. Confusion and fever together point to a central nervous system infection (meningoencephalitis or brain abscess) or a systemic infection that has impaired cognition. Malignancies can cause fever and altered mental status, but these are typically asynchronous events.

His past medical history was notable for hypertension, dyslipidemia, gout, actinic keratosis, and gastroesophageal reflux. His surgical history included bilateral knee replacements, repair of a left rotator cuff injury, and a herniorrhaphy. He was a nonsmoker who consumed 4 to 6 beers daily. His medications included clonidine, colchicine, atorvastatin, extended release metoprolol, triamterene‐hydrochlorothiazide, probenecid, and as‐needed ibuprofen and omeprazole.

Upon presentation he was cooperative and in no distress. Temperature was 38.9°C, pulse 119 beats per minute, blood pressure 140/90 mm Hg, and oxygen saturation 94% on room air. He was noted to have logical thinking but impaired concentration. His upper extremity movement was restricted because of postoperative discomfort and swelling rather than true weakness. The rest of the exam was normal.

Metabolic, infectious, structural (intracranial), and toxic disorders can cause altered mental status. His heavy alcohol use puts him at risk for alcohol withdrawal and infections (such as Listeria meningitis), both of which may explain his fever and altered mental status. Signs and symptoms of meningitis are absent at this time. His knee prostheses could have harbored an infection preoperatively and therefore warrant close examination. Patients sometimes have adverse reactions to medications they have been prescribed but are not exposed to until hospitalization, although his surgical procedure was likely done on an outpatient basis. Empiric thiamine should be administered early given his confusion and alcohol habits.

Basic laboratories revealed a hemoglobin of 11.2 g/dL, a white blood cell (WBC) count of 6,900/mm3 with 75% neutrophils, and platelets of 206,000/mm3. Mean corpuscular volume was 97 fL. Serum albumin was 2.4 g/dL, sodium 134 mmol/L, potassium 3.9 mmol/L, blood urea nitrogen 12 mg/dL, and creatinine 0.9 mg/dL. The aspartate aminotransferase was 93 U/L, alanine aminotransferase 73 U/L, alkaline phosphatase 254 U/L, and total bilirubin 1.0 mg/dL. Urinalysis was normal. Over the next 16 days, fevers and waxing and waning mentation continued. The following studies were normal or negative: blood and urine cultures; transthoracic echocardiogram; antinuclear antibodies, hepatitis B surface antigen, hepatitis C antibody, and human immunodeficiency virus antibody; magnetic resonance imaging of the brain, electroencephalogram, and lower extremity venous ultrasound.

Hypoalbuminemia may signal chronic illness, hypoproduction from liver disease (caused by his heavy alcohol use), or losses from the kidney or gastrointestinal tract. His anemia may reflect chronic disease or point toward a specific underlying disorder. For example, fever and anemia could arise from hemolytic processes such as thrombotic thrombocytopenic purpura or clostridial infections.

An extensive workup has not revealed a cause for his prolonged fever (eg, infection, malignancy, autoimmune condition, or toxin). Likewise, an explanation for confusion is lacking. Because systemic illness and structural brain disease have not been uncovered, a lumbar puncture is indicated.

A lumbar puncture under fluoroscopic guidance revealed a cerebrospinal fluid (CSF) WBC count of 6/mm3, red blood cell (RBC) count of 2,255/mm3, protein of 49 mg/dL, and glucose of 54 mg/dL. The WBC differential was not reported. No growth was reported on bacterial cultures. Polymerase chain reactions for enterovirus and herpes simplex viruses 1 and 2 were negative. Cryptococcal antigen and Venereal Disease Research Laboratory serologies were also negative.

A CSF WBC count of 6/mm3 is just above the normal range (typically ≤5/mm3) but could be explained by a traumatic tap given the elevated RBC count; the protein and glucose are likewise at the border of normal. Collectively, these are nonspecific findings that could point to an infectious or noninfectious cause of intrathecal or paraspinous inflammation, but are not suggestive of bacterial meningitis.
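As a rough check of this reasoning, a commonly cited bedside rule of thumb (an approximation introduced here for illustration, not part of the original report) attributes roughly 1 WBC to every 500 to 1,000 RBCs introduced by a traumatic tap:

$$ \text{blood-derived WBC} \approx \frac{\text{CSF RBC}}{500\text{ to }1000} = \frac{2255}{500\text{ to }1000} \approx 2\text{ to }5\ \text{cells/mm}^3 $$

That estimate would account for much of the observed 6 cells/mm3, consistent with the interpretation above.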

The patient developed pneumonia, for which he received ertapenem. On hospital day 17 he was intubated for hypoxia and respiratory distress and was extubated after 4 days of mechanical ventilation. Increasing weakness in all extremities prompted magnetic resonance imaging of the spine, which revealed fluid and enhancement involving the soft tissues around C3-C4 and C5-C6, raising concern for discitis and osteomyelitis. Possible septic arthritis at the C3-C4 and C4-C5 facets was noted. Ring-enhancing fluid collections from T2 to T8, compatible with an epidural abscess, were seen, with cord compression at T4-T5 and T6-T7. Enhancement and fluid involving the facet joints between T2 and T7 were also consistent with septic arthritis (Figure 1).

Figure 1
Magnetic resonance imaging of the spine showing abnormal soft tissue adjacent to the right costovertebral junction with extension through the neural foramen and cord compression at T5.

His pneumonia appears to have developed many days into his hospitalization and is therefore unlikely to account for his initial fever and confusion. Blood cultures and echocardiogram have not suggested an endovascular infection that could account for such widespread vertebral and epidural deposition. A wide range of bacteria can cause epidural abscesses and septic arthritis, most commonly Staphylococcus aureus. Less common pathogens with a predilection for osteoarticular involvement, such as Brucella species, warrant consideration when there is appropriate epidemiologic risk.

Systemic bacterial infection remains a concern, with his alcoholism rendering him partially immunosuppressed. However, it is unusual for a bacterial infection to involve this many adjacent spinal joints, and a working diagnosis of multilevel spinal infection should therefore prompt consideration of noninfectious processes. When a patient develops a swollen peripheral joint and fever in the postoperative setting, gout or pseudogout is a leading consideration. That same thinking should be applied to the vertebrae, where spinal gout can manifest. Surgery itself, or the associated changes in alcohol consumption and in medications (at least 4 of which are relevant to gout: colchicine, hydrochlorothiazide, probenecid, and ibuprofen), could predispose him to a flare.

Aspiration of the epidural collection yielded a negative Gram stain and culture. He developed swelling in the bilateral proximal interphalangeal joints and was treated with steroids and colchicine for a suspected gout flare. Vancomycin and piperacillin-tazobactam were initiated, and on hospital day 22 the patient was transferred to another hospital for further evaluation by neurosurgery.

The negative Gram stain and culture argue against septic arthritis, but these are imperfect tests and will not detect atypical pathogens (eg, spinal tuberculosis). Reexamination of the aspirate for urate and calcium pyrophosphate crystals would be useful. Initiation of steroids in the setting of a potentially undiagnosed infection requires a careful risk/benefit analysis. It may be reasonable to treat the patient with colchicine alone while withholding steroids and avoiding nonsteroidal agents in case invasive procedures are planned.

On exam his temperature was 36°C, blood pressure 156/92 mm Hg, pulse 100 beats per minute, respirations 21 per minute, and oxygen saturation 97% on room air. He was not in acute distress and was oriented only to self. Bilateral 2+ lower extremity pitting edema up to the knees was noted. Examination of the heart and lungs was unremarkable. Gouty tophi were noted over both elbows. His joints were normal.

Cranial nerves II-XII were normal. Motor exam revealed normal muscle tone and bulk. Muscle strength was approximately 3/5 in the right upper extremity and 4+/5 in the left upper extremity. Bilateral lower extremity strength was 3/5 in hip flexion, knee flexion, and knee extension. Dorsiflexion and plantar flexion were approximately 2/5 bilaterally. Sensation was intact to light touch and pinprick, and proprioception was normal. Gait was not tested. A Foley catheter was in place.

This examination confirms ongoing encephalopathy and incomplete quadriplegia. The lower extremity weakness is nearly equal proximally and distally, which can be seen with an advanced peripheral neuropathy but is more characteristic of myelopathy. The expected concomitant sensory deficit of myelopathy is not present, although this may be difficult to detect in a confused patient. Reflex testing would help in distinguishing myelopathy (favored because of the imaging findings) from a rapidly progressive peripheral motor neuropathy (eg, acute inflammatory demyelinating polyneuropathy or acute intermittent porphyria).

The pitting edema likely represents fluid overload, which can be a nonspecific finding after prolonged immobility during hospitalization; hypoalbuminemia is often speculated to play a role when this develops. His alcohol use puts him at risk for heart failure (although there is no evidence of this on exam) and liver disease (which his liver function tests suggest). The tophi speak to the extent and chronicity of his hyperuricemia.

On arrival he reported recent onset diarrhea. Medications at transfer included metoprolol, omeprazole, prednisone, piperacillin/tazobactam, vancomycin, and colchicine; acetaminophen, bisacodyl, diphenhydramine, fentanyl, subcutaneous insulin, and labetalol were administered as needed. Laboratory studies included a hemoglobin of 9.5 g/dL, WBC count of 7,300/mm3 with 95% neutrophils, platelets 301,000/mm3, sodium 151 mmol/L, potassium 2.9 mmol/L, blood urea nitrogen 76 mg/dL, creatinine 2.0 mg/dL, aspartate aminotransferase 171 U/L, and alanine aminotransferase 127 U/L. Serum albumin was 1.7 g/dL.

At least 3 of his medications (diphenhydramine, fentanyl, and prednisone) may be contributing to his ongoing altered mental status, which may be further compounded by hypernatremia. Although his liver disease remains uncharacterized, hepatic encephalopathy may be contributing to his confusion as well.

Colchicine is likely responsible for his diarrhea, which would be the most readily available explanation for his hypernatremia, hypokalemia, and acute kidney injury (AKI). Acute kidney injury could result from progressive liver disease (hepatorenal syndrome), decreased arterial perfusion (suggested by third spacing or his diarrhea), acute tubular necrosis (from infection or medication), or urinary retention secondary to catheter obstruction. Acute hyperuricemia can also cause AKI (urate nephropathy).

Anemia has progressed and requires evaluation for blood loss as well as hemolysis. Hepatotoxicity from any of his medications (eg, acetaminophen) must be considered. Coagulation studies and review of the previous abdominal computed tomography would help determine the extent of his liver disease.

Neurosurgical consultation was obtained, and the patient and his family elected to proceed with a thoracic laminectomy. Cheesy fluid identified at the T6-T7 facet joints was found to contain rare deposits of monosodium urate crystals. Surgical specimen cultures were sterile. His mental status and strength slowly improved to baseline following the surgery. He was discharged on postoperative day 7 to a rehabilitation facility. At telephone follow-up he reported that he had regained his strength completely.

The fluid analysis and clinical course confirm spinal gout. The presenting encephalopathy remains unexplained; I am unaware of gout leading to altered mental status.

COMMENTARY

Gout is an inflammatory condition triggered by the deposition of monosodium urate crystals in tissues in association with hyperuricemia.[1] Based on the 2007-2008 National Health and Nutrition Examination Survey, the prevalence of gout among US adults was 3.9% (8.3 million individuals).[2] These rates are increasing and are thought to be spurred by the aging population, increasing rates of obesity, and changing dietary habits, including increased consumption of soft drinks and red meat.[3, 4, 5] The development of gout during hospitalization can prolong length of stay, and the implementation of a management protocol appears to help decrease treatment delays and the inappropriate discontinuation of gout prophylaxis.[6, 7] Surgery, with its associated physiologic stressors, can trigger gout, which is often polyarticular and presents with fever, leading to testing and consultations for the febrile episode.[8]

Gout is an ancient disease that is familiar to most clinicians. In 1666, Daniel Sennert, a German physician, described gout as "the physician's shame" because of its infrequent recognition.[9] Clinical gout spans 3 stages: asymptomatic hyperuricemia, acute and intercritical gout, and chronic gouty arthritis. The typical acute presentation is monoarticular, with the abrupt onset of pain, swelling, warmth, and erythema in a peripheral joint. It manifests most characteristically in the first metatarsophalangeal joint (podagra), but also frequently involves the midfoot, ankle, knee, and wrist, and sometimes affects multiple joints simultaneously (polyarticular gout).[1, 10] The visualization of monosodium urate crystals either in synovial fluid or from a tophus is diagnostic of gout; however, guidelines recognize that a classic presentation of gout may be diagnosed based on clinical criteria alone.[11] Dual-energy computed tomography and ultrasonography are emerging as techniques for the visualization of monosodium urate crystals; however, they are not currently routinely recommended.[12]

There are many unusual presentations of gout, with an increase in such reports paralleling both the overall increase in the prevalence of gout and improvements in available imaging techniques.[13] Atypical presentations pose diagnostic challenges and are often caused by tophaceous deposits in unusual locations. Reports of atypical gout have described entrapment neuropathies (eg, gouty deposits inducing carpal tunnel syndrome), ocular gout manifested as conjunctival deposits and uveitis, pancreatic gout presenting as a mass, and dermatologic manifestations including panniculitis.[13, 14]

Spinal gout (also known as axial gout) manifests when crystal-induced inflammation, erosive arthritis, and tophaceous deposits occur along the spinal column. A cross-sectional study of patients with poorly controlled gout reported the prevalence of spinal gout diagnosed by computed tomography to be 35%; these radiographic findings were not consistently correlated with back pain.[15] Imaging features that are suggestive of spinal gout include intra-articular and juxta-articular erosions with sclerotic margins and density greater than that of the surrounding muscle. Periosteal new bone formation adjacent to bony destruction can form overhanging edges.[16] When retrospectively presented with the final diagnosis, the radiologist at our institution noted that the appearance was "typical gout in an atypical location."

Spinal gout can be confused with spinal metastasis, infection, and stenosis. It can remain asymptomatic or present with back pain, radiculopathy, or cord compression. The lumbar spine is the most frequently affected site.[17, 18] Many patients with spinal gout have had chronic tophaceous gout with radiologic evidence of erosions in the peripheral joints.[15] Patients with spinal gout also have elevated urate levels and markers of inflammation.[18] Surgical decompression and stabilization are recommended when there is frank cord compression, progressive neurologic compromise, or lack of improvement with gout therapy alone.[18]

This patient's male gender, history of gout, hypertension, alcohol consumption, and thiazide diuretic use placed him at increased risk of a gout attack.[19, 20] The possible interruption of urate-lowering therapy for the surgical procedure, as well as the surgery itself, further heightened his risk of suffering acute gouty arthritis in the perioperative period.[21] The patient's encephalopathy may have masked back pain and precluded an accurate neurologic exam. To our knowledge, there is one case report describing encephalopathy that improved with colchicine and was possibly related to gout.[22] This patient's encephalopathy was deemed multifactorial and attributed to alcohol withdrawal, medications (including opioids and steroids), and infection (pneumonia).

Gout is best known for its peripheral arthritis and is rarely invoked in the consideration of spinal and myelopathic processes, where more pressing competing diagnoses, such as infection and malignancy, are typically considered. In addition, when surgical specimens are submitted for pathologic examination in formaldehyde (rather than alcohol), monosodium urate crystals dissolve and are thus difficult to identify in the specimen.

This case reminds us that gout remains a diagnostic challenge and should be considered in the differential of an inflammatory process. Recognition of the multifaceted nature of gout can allow for the earlier recognition and treatment of the less typical presentations of this ancient malady.

KEY TEACHING POINTS

  1. Crystalline disease is a common cause of postoperative arthritis.
  2. Gout (and pseudogout) should be considered in cases of focal inflammation (detected by examination or imaging) when the evidence or predisposition for infection is limited or nonexistent.
  3. Spinal gout presents with back pain, radiculopathy, or cord compression and may be confused with spinal metastasis, infection, and stenosis.

Acknowledgements

The authors thank Dr. Kari Waddell and Elaine Bammerlin for their assistance in the preparation of this manuscript.

Disclosure: Nothing to report.

References
  1. Burns CM, Wortmann RL. Clinical features and treatment of gout. In: Firestein GS, Budd RC, Gabriel SE, McInnes IB, O'Dell JR, eds. Kelley's Textbook of Rheumatology. Vol 2. 9th ed. Philadelphia, PA: Elsevier/Saunders; 2013:1544-1575.
  2. Zhu Y, Pandya BJ, Choi HK. Prevalence of gout and hyperuricemia in the US general population: the National Health and Nutrition Examination Survey 2007-2008. Arthritis Rheum. 2011;63(10):3136-3141.
  3. Wallace KL, Riedel AA, Joseph-Ridge N, Wortmann R. Increasing prevalence of gout and hyperuricemia over 10 years among older adults in a managed care population. J Rheumatol. 2004;31(8):1582-1587.
  4. Choi HK, Atkinson K, Karlson EW, Willett W, Curhan G. Purine-rich foods, dairy and protein intake, and the risk of gout in men. N Engl J Med. 2004;350(11):1093-1103.
  5. Choi HK, Willett W, Curhan G. Fructose-rich beverages and risk of gout in women. JAMA. 2010;304(20):2270-2278.
  6. Lee G, Roberts L. Healthcare burden of in-hospital gout. Intern Med J. 2012;42(11):1261-1263.
  7. Kamalaraj N, Gnanenthiran SR, Kathirgamanathan T, Hassett GM, Gibson KA, McNeil HP. Improved management of acute gout during hospitalization following introduction of a protocol. Int J Rheum Dis. 2012;15(6):512-520.
  8. Craig MH, Poole GV, Hauser CJ. Postsurgical gout. Am Surg. 1995;61(1):56-59.
  9. Karsh R, McCarthy JD. Evolution of modern medicine. Arch Intern Med. 1960;105(4):640-644.
  10. Neogi T. Clinical practice. Gout. N Engl J Med. 2011;364(5):443-452.
  11. Shmerling RH. Management of gout: a 57-year-old man with a history of podagra, hyperuricemia, and mild renal insufficiency. JAMA. 2012;308(20):2133-2141.
  12. Rettenbacher T, Ennemoser S, Weirich H, et al. Diagnostic imaging of gout: comparison of high-resolution US versus conventional X-ray. Eur Radiol. 2008;18(3):621-630.
  13. Forbess LJ, Fields TR. The broad spectrum of urate crystal deposition: unusual presentations of gouty tophi. Semin Arthritis Rheum. 2012;42(2):146-154.
  14. Ning TC, Keenan RT. Unusual clinical presentations of gout. Curr Opin Rheumatol. 2010;22(2):181-187.
  15. Konatalapalli RM, Lumezanu E, Jelinek JS, Murphey MD, Wang H, Weinstein A. Correlates of axial gout: a cross-sectional study. J Rheumatol. 2012;39(7):1445-1449.
  16. Saketkoo LA, Robertson HJ, Dyer HR, Virk Z-U, Ferreyro HR, Espinoza LR. Axial gouty arthropathy. Am J Med Sci. 2009;338(2):140-146.
  17. Lumezanu E, Konatalapalli R, Weinstein A. Axial (spinal) gout. Curr Rheumatol Rep. 2012;14(2):161-164.
  18. Hou LC, Hsu AR, Veeravagu A, Boakye M. Spinal gout in a renal transplant patient: a case report and literature review. Surg Neurol. 2007;67(1):65-73.
  19. Zhang Y, Woods R, Chaisson CE, et al. Alcohol consumption as a trigger of recurrent gout attacks. Am J Med. 2006;119(9):800.e11-800.e16.
  20. Hunter D, York M, Chaisson CE, Woods R, Niu J, Zhang Y. Recent diuretic use and the risk of recurrent gout attacks: the online case-crossover gout study. J Rheumatol. 2006;33(7):1341-1345.
  21. Kang EH, Lee EY, Lee YJ, Song YW, Lee EB. Clinical features and risk factors of postsurgical gout. Ann Rheum Dis. 2008;67(9):1271-1275.
  22. Alla P, Carli P, Cellarier G, Paris JF. Gouty encephalopathy: myth or reality [in French]? Rev Med Interne. 1997;18(6):474-476.

Box

1

The approach to clinical conundrums by an expert clinician is revealed through the presentation of an actual patient's case in an approach typical of a morning report. Similarly to patient care, sequential pieces of information are provided to the clinician, who is unfamiliar with the case. The focus is on the thought processes of both the clinical team caring for the patient and the discussant.

Box

2

This icon represents the patient's case. Each paragraph that follows represents the discussant's thoughts.

A 67‐year‐old male presented to an outside hospital with a 1‐day history of fevers up to 39.4C, bilateral upper extremity weakness, and confusion. Forty‐eight hours prior to his presentation he had undergone uncomplicated bilateral carpal tunnel release surgery for the complaint of bilateral upper extremity paresthesias.

Bilateral carpal tunnel syndrome should prompt consideration of systemic diseases that infiltrate or impinge both canals (eg, rheumatoid arthritis, acromegaly, hypothyroidism, amyloidosis), although it is most frequently explained by a bilateral repetitive stress (eg, workplace typing). The development of upper extremity weakness suggests that an alternative condition such as cervical myelopathy, bilateral radiculopathy, or a rapidly progressive peripheral neuropathy may be responsible for his paresthesias. It would be unusual for a central nervous system process to selectively cause bilateral upper extremity weakness. Occasionally, patients emerge from surgery with limb weakness caused by peripheral nerve injury sustained from malpositioning of the extremity, but this would have been evident immediately following the operation.

Postoperative fevers are frequently unexplained, but require a search for common healthcare‐associated infections, such as pneumonia, urinary tract infection, intravenous catheter thrombophlebitis, wound infection, or Clostridium difficile colitis. However, such complications are unlikely following an ambulatory procedure. Confusion and fever together point to a central nervous system infection (meningoencephalitis or brain abscess) or a systemic infection that has impaired cognition. Malignancies can cause fever and altered mental status, but these are typically asynchronous events.

His past medical history was notable for hypertension, dyslipidemia, gout, actinic keratosis, and gastroesophageal reflux. His surgical history included bilateral knee replacements, repair of a left rotator cuff injury, and a herniorrhaphy. He was a nonsmoker who consumed 4 to 6 beers daily. His medications included clonidine, colchicine, atorvastatin, extended release metoprolol, triamterene‐hydrochlorothiazide, probenecid, and as‐needed ibuprofen and omeprazole.

Upon presentation he was cooperative and in no distress. Temperature was 38.9C, pulse 119 beats per minute, blood pressure 140/90 mm Hg, and oxygen saturation 94% on room air. He was noted to have logical thinking but impaired concentration. His upper extremity movement was restricted because of postoperative discomfort and swelling rather than true weakness. The rest of the exam was normal.

Metabolic, infectious, structural (intracranial), and toxic disorders can cause altered mental status. His heavy alcohol use puts him at risk for alcohol withdrawal and infections (such as Listeria meningitis), both of which may explain his fever and altered mental status. Signs and symptoms of meningitis are absent at this time. His knee prostheses could have harbored an infection preoperatively and therefore warrant close examination. Patients sometimes have adverse reactions to medications they have been prescribed but are not exposed to until hospitalization, although his surgical procedure was likely done on an outpatient basis. Empiric thiamine should be administered early given his confusion and alcohol habits.

Basic laboratories revealed a hemoglobin of 11.2 g/dL, white blood cell (WBC) count of 6,900/mm3 with 75% neutrophils, platelets of 206,000/mm3. Mean corpuscular volume was 97 mm3. Serum albumin was 2.4 g/dl, sodium 134 mmol/L, potassium 3.9 mmol/L, blood urea nitrogen 12 mg/dL, and creatinine 0.9 mg/dL. The aspartate aminotransferase was 93 U/L, alanine aminotransferase 73 U/L, alkaline phosphatase 254 U/L, and total bilirubin 1.0 mg/dL. Urinalysis was normal. Over the next 16 days fevers and waxing and waning mentation continued. The following studies were normal or negative: blood and urine cultures; transthoracic echocardiogram, antinuclear antibodies, hepatitis B surface antigen, hepatitis C antibody, and human immunodeficiency virus antibody; magnetic resonance imaging of the brain, electroencephalogram, and lower extremity venous ultrasound.

Hypoalbuminemia may signal chronic illness, hypoproduction from liver disease (caused by his heavy alcohol use), or losses from the kidney or gastrointestinal tract. His anemia may reflect chronic disease or point toward a specific underlying disorder. For example, fever and anemia could arise from hemolytic processes such as thrombotic thrombocytopenic purpura or clostridial infections.

An extensive workup has not revealed a cause for his prolonged fever (eg, infection, malignancy, autoimmune condition, or toxin). Likewise, an explanation for confusion is lacking. Because systemic illness and structural brain disease have not been uncovered, a lumbar puncture is indicated.

A lumbar puncture under fluoroscopic guidance revealed a cerebrospinal fluid (CSF) WBC count of 6/mm3, red blood cell count (RBC) 2255/mm3, protein 49 mg/dL, and glucose 54 mg/dL. The WBC differential was not reported. No growth was reported on bacterial cultures. Polymerase chain reactions for enterovirus and herpes simplex viruses 1 and 2 were negative. Cryptococcal antigen and Venereal Disease Research Laboratory serologies were also negative.

A CSF WBC count of 6 is out of the normal range, but could be explained by a traumatic tap given the elevated RBC; the protein and glucose are likewise at the border of normal. Collectively, these are nonspecific findings that could point to an infectious or noninfectious cause of intrathecal or paraspinous inflammation, but are not suggestive of bacterial meningitis.

The patient developed pneumonia, for which he received ertapenem. On hospital day 17 he was intubated for hypoxia and respiratory distress and was extubated after 4 days of mechanical ventilation. Increasing weakness in all extremities prompted magnetic resonance imaging of the spine, which revealed fluid and enhancement involving the soft tissues around C3‐C4 and C5‐C6, raising concerns for discitis and osteomyelitis. Possible septic arthritis at the C3‐C4 and C4‐C5 facets was noted. Ring enhancing fluid collections from T2‐T8 compatible with an epidural abscess with cord compression at T4‐T5 and T6‐T7 were seen. Enhancement and fluid involving the facet joints between T2‐T7 was also consistent with septic arthritis (Figure 1).

Figure 1
Magnetic resonance imaging of the spine showing abnormal soft tissue adjacent to the right costovertebral junction with extension through the neural foramen and cord compression at T5.

His pneumonia appears to have developed many days into his hospitalization, and therefore is unlikely to account for his initial fever and confusion. Blood cultures and echocardiogram have not suggested an endovascular infection that could account for such widespread vertebral and epidural deposition. A wide number of bacteria can cause epidural abscesses and septic arthritis, most commonly Staphylococcus aureus. Less common pathogens with a predilection for osteoarticular involvement, such as Brucella species, warrant consideration when there is appropriate epidemiologic risk.

Systemic bacterial infection remains a concern with his alcoholism rendering him partially immunosuppressed. However, a large number of adjacent spinal joints harboring a bacterial infection is unusual, and a working diagnosis of multilevel spinal infection, therefore, should prompt consideration of noninfectious processes. When a patient develops a swollen peripheral joint and fever in the postoperative setting, gout or pseudogout is a leading consideration. That same thinking should be applied to the vertebrae, where spinal gout can manifest. Surgery itself or associated changes in alcohol consumption patterns or changes in medications (at least 4 of which are relevant to goutcolchicine, hydrochlorothiazide, probenecid, and ibuprofen) could predispose him to a flare.

Aspiration of the epidural collection yielded a negative Gram stain and culture. He developed swelling in the bilateral proximal interphalangeal joints and was treated with steroids and colchicine for suspected gout flare. Vancomycin and piperacillin‐tazobactam were initiated, and on hospital day 22 the patient was transferred to another hospital for further evaluation by neurosurgery.

The negative Gram stain and culture argues against septic arthritis, but these are imperfect tests and will not detect atypical pathogens (eg, spinal tuberculosis). Reexamination of the aspirate for urate and calcium pyrophosphate crystals would be useful. Initiation of steroids in the setting of potentially undiagnosed infection requires a careful risk/benefit analysis. It may be reasonable to treat the patient with colchicine alone while withholding steroids and avoiding nonsteroidal agents in case invasive procedures are planned.

On exam his temperature was 36C, blood pressure 156/92 mm Hg, pulse 100 beats per minute, respirations 21 per minute, and oxygenation 97% on room air. He was not in acute distress and was only oriented to self. Bilateral 2+ lower extremity pitting edema up to the knees was noted. Examination of the heart and lungs was unremarkable. Gouty tophi were noted over both elbows. His joints were normal.

Cranial nerves IIXII were normal. Motor exam revealed normal muscle tone and bulk. Muscle strength was approximately 3/5 in the right upper extremity and 4+/5 in the left upper extremity. Bilateral lower extremity strength was 3/5 in hip flexion, knee flexion, and knee extension. Dorsiflexion and plantar flexion were approximately 2/5 bilaterally. Sensation was intact to light touch and pinprick, and proprioception was normal. Gait was not tested. A Foley catheter was in place.

This examination confirms ongoing encephalopathy and incomplete quadriplegia. The lower extremity weakness is nearly equal proximally and distally, which can be seen with an advanced peripheral neuropathy but is more characteristic of myelopathy. The expected concomitant sensory deficit of myelopathy is not present, although this may be difficult to detect in a confused patient. Reflex testing would help in distinguishing myelopathy (favored because of the imaging findings) from a rapid progressive peripheral motor neuropathy (eg, acute inflammatory demyelinating polyneuropathy or acute intermittent porphyria).

The pitting edema likely represents fluid overload, which can be nonspecific after prolonged immobility during hospitalization; hypoalbuminemia is oftentimes speculated to play a role when this develops. His alcohol use puts him at risk for heart failure (although there is no evidence of this on exam) and liver disease (which his liver function tests suggest). The tophi speak to the extent and chronicity of his hyperuricemia.

On arrival he reported recent onset diarrhea. Medications at transfer included metoprolol, omeprazole, prednisone, piperacillin/tazobactam, vancomycin, and colchicine; acetaminophen, bisacodyl, diphenhydramine, fentanyl, subcutaneous insulin, and labetalol were administered as needed. Laboratory studies included a hemoglobin of 9.5 g/dL, WBC count of 7,300/mm3 with 95% neutrophils, platelets 301,000/mm3, sodium 151 mmol/L, potassium 2.9 mmol/L, blood urea nitrogen 76 mg/dL, creatinine 2.0 mg/dL, aspartate aminotransferase 171 U/L, and alanine aminotransferase 127 U/L. Serum albumin was 1.7 g/dL.

At least 3 of his medicationsdiphenhydramine, fentanyl, and prednisonemay be contributing to his ongoing altered mental status, which may be further compounded by hypernatremia. Although his liver disease remains uncharacterized, hepatic encephalopathy may be contributing to his confusion as well.

Colchicine is likely responsible for his diarrhea, which would be the most readily available explanation for his hypernatremia, hypokalemia, and acute kidney injury (AKI). Acute kidney injury could result from progressive liver disease (hepatorenal syndrome), decreased arterial perfusion (suggested by third spacing or his diarrhea), acute tubular necrosis (from infection or medication), or urinary retention secondary to catheter obstruction. Acute hyperuricemia can also cause AKI (urate nephropathy).

Anemia has progressed and requires evaluation for blood loss as well as hemolysis. Hepatotoxicity from any of his medications (eg, acetaminophen) must be considered. Coagulation studies and review of the previous abdominal computed tomography would help determine the extent of his liver disease.

Neurosurgical consultation was obtained and the patient and his family elected to proceed with a thoracic laminectomy. Cheesy fluid was identified at the facet joints at T6‐T7, which was found to contain rare deposits of monosodium urate crystals. Surgical specimen cultures were sterile. His mental status and strength slowly improved to baseline following the surgery. He was discharged on postoperative day 7 to a rehabilitation facility. On the telephone follow‐up he reported that he has regained his strength completely.

The fluid analysis and clinical course confirms spinal gout. The presenting encephalopathy remains unexplained; I am unaware of gout leading to altered mental status.

COMMENTARY

Gout is an inflammatory condition triggered by the deposition of monosodium urate crystals in tissues in association with hyperuricemia.[1] Based on the 20072008 National Health and Nutrition Examination Survey, the prevalence of gout among US adults was 3.9% (8.3 million individuals).[2] These rates are increasing and are thought to be spurred by the aging population, increasing rates of obesity, and changing dietary habits including increases in the consumption of soft drinks and red meat.[3, 4, 5] The development of gout during hospitalization can prolong length of stay, and the implementation of a management protocol appears to help decrease treatment delays and the inappropriate discontinuation of gout prophylaxis.[6, 7] Surgery, with its associated physiologic stressors, can trigger gout, which is often polyarticular and presents with fever leading to testing and consultations for the febrile episode.[8]

Gout is an ancient disease that is familiar to most clinicians. In 1666, Daniel Sennert, a German physician, described gout as the physician's shame because of its infrequent recognition.[9] Clinical gout spans 3 stages: asymptomatic hyperuricemia, acute and intercritical gout, and chronic gouty arthritis. The typical acute presentation is monoarticular with the abrupt onset of pain, swelling, warmth, and erythema in a peripheral joint. It manifests most characteristically in the first metatarsophalangeal joint (podagra), but also frequently involves the midfoot, ankle, knee, and wrist and sometimes affects multiple joints simultaneously (polyarticular gout).[1, 10] The visualization of monosodium urate crystals either in synovial fluid or from a tophus is diagnostic of gout; however, guidelines recognize that a classic presentation of gout may be diagnosed based on clinical criteria alone.[11] Dual energy computerized tomography and ultrasonography are emerging as techniques for the visualization of monosodium urate crystals; however, they are not currently routinely recommended.[12]

There are many unusual presentations of gout, with an increase in such reports paralleling both the overall increase in the prevalence of gout and improvements in available imaging techniques.[13] Atypical presentations present diagnostic challenges and are often caused by tophaceous deposits in unusual locations. Reports of atypical gout have described entrapment neuropathies (eg, gouty deposits inducing carpal tunnel syndrome), ocular gout manifested as conjunctival deposits and uveitis, pancreatic gout presenting as a mass, and dermatologic manifestations including panniculitis.[13, 14]

Spinal gout (also known as axial gout) manifests when crystal‐induced inflammation, erosive arthritis, and tophaceous deposits occur along the spinal column. A cross‐sectional study of patients with poorly controlled gout reported the prevalence of spinal gout diagnosed by computerized tomography to be 35%. These radiographic findings were not consistently correlated with back pain.[15] Imaging features that are suggestive of spinal gout include intra‐articular and juxta‐articular erosions with sclerotic margins and density greater than the surrounding muscle. Periosteal new bone formation adjacent to bony destruction can form overhanging edges.[16] When retrospectively presented with the final diagnosis, the radiologist at our institution noted that the appearance was typical gout in an atypical location.

Spinal gout can be confused with spinal metastasis, infection, and stenosis. It can remain asymptomatic or present with back pain, radiculopathy, or cord compression. The lumbar spine is the most frequently affected site.[17, 18] Many patients with spinal gout have had chronic tophaceous gout with radiologic evidence of erosions in the peripheral joints.[15] Patients with spinal gout also have elevated urate levels and markers of inflammation.[18] Surgical decompression and stabilization is recommended when there is frank cord compression, progressive neurologic compromise, or lack of improvement with gout therapy alone.[18]

This patient's male gender, history of gout, hypertension, alcohol consumption, and thiazide diuretic use placed him at an increased risk of a gout attack.[19, 20] The possible interruption of urate‐lowering therapy for the surgical procedure and surgery itself further heightened his risk of suffering acute gouty arthritis in the perioperative period.[21] The patient's encephalopathy may have masked back pain and precluded an accurate neurologic exam. There is one case report to our knowledge describing encephalopathy that improved with colchicine and was possibly related to gout.[22] This patient's encephalopathy was deemed multifactorial and attributed to alcohol withdrawal, medications (including opioids and steroids), and infection (pneumonia).

Gout is best known for its peripheral arthritis and is rarely invoked in the consideration of spinal and myelopathic processes where more pressing competing diagnoses, such as infection and malignancy, are typically considered. In addition, when surgical specimens are submitted for examination for pathology in formaldehyde (rather than alcohol), monosodium urate crystals are dissolved and are thus difficult to identify in the specimen.

This case reminds us that gout remains a diagnostic challenge and should be considered in the differential of an inflammatory process. Recognition of the multifaceted nature of gout can allow for the earlier recognition and treatment of the less typical presentations of this ancient malady.

KEY TEACHING POINTS

  1. Crystalline disease is a common cause of postoperative arthritis.
  2. Gout (and pseudogout) should be considered in cases of focal inflammation (detected by examination or imaging) when the evidence or predisposition for infection is limited or nonexistent.
  3. Spinal gout presents with back pain, radiculopathy, or cord compression and may be confused with spinal metastasis, infection, and stenosis.

Acknowledgements

The authors thank Dr. Kari Waddell and Elaine Bammerlin for their assistance in the preparation of this manuscript.

Disclosure: Nothing to report.

Box

1

The approach to clinical conundrums by an expert clinician is revealed through the presentation of an actual patient's case in an approach typical of a morning report. Similarly to patient care, sequential pieces of information are provided to the clinician, who is unfamiliar with the case. The focus is on the thought processes of both the clinical team caring for the patient and the discussant.

Box

2

This icon represents the patient's case. Each paragraph that follows represents the discussant's thoughts.

A 67‐year‐old male presented to an outside hospital with a 1‐day history of fevers up to 39.4C, bilateral upper extremity weakness, and confusion. Forty‐eight hours prior to his presentation he had undergone uncomplicated bilateral carpal tunnel release surgery for the complaint of bilateral upper extremity paresthesias.

Bilateral carpal tunnel syndrome should prompt consideration of systemic diseases that infiltrate or impinge both canals (eg, rheumatoid arthritis, acromegaly, hypothyroidism, amyloidosis), although it is most frequently explained by a bilateral repetitive stress (eg, workplace typing). The development of upper extremity weakness suggests that an alternative condition such as cervical myelopathy, bilateral radiculopathy, or a rapidly progressive peripheral neuropathy may be responsible for his paresthesias. It would be unusual for a central nervous system process to selectively cause bilateral upper extremity weakness. Occasionally, patients emerge from surgery with limb weakness caused by peripheral nerve injury sustained from malpositioning of the extremity, but this would have been evident immediately following the operation.

Postoperative fevers are frequently unexplained, but require a search for common healthcare‐associated infections, such as pneumonia, urinary tract infection, intravenous catheter thrombophlebitis, wound infection, or Clostridium difficile colitis. However, such complications are unlikely following an ambulatory procedure. Confusion and fever together point to a central nervous system infection (meningoencephalitis or brain abscess) or a systemic infection that has impaired cognition. Malignancies can cause fever and altered mental status, but these are typically asynchronous events.

His past medical history was notable for hypertension, dyslipidemia, gout, actinic keratosis, and gastroesophageal reflux. His surgical history included bilateral knee replacements, repair of a left rotator cuff injury, and a herniorrhaphy. He was a nonsmoker who consumed 4 to 6 beers daily. His medications included clonidine, colchicine, atorvastatin, extended release metoprolol, triamterene‐hydrochlorothiazide, probenecid, and as‐needed ibuprofen and omeprazole.

Upon presentation he was cooperative and in no distress. Temperature was 38.9C, pulse 119 beats per minute, blood pressure 140/90 mm Hg, and oxygen saturation 94% on room air. He was noted to have logical thinking but impaired concentration. His upper extremity movement was restricted because of postoperative discomfort and swelling rather than true weakness. The rest of the exam was normal.

Metabolic, infectious, structural (intracranial), and toxic disorders can cause altered mental status. His heavy alcohol use puts him at risk for alcohol withdrawal and infections (such as Listeria meningitis), both of which may explain his fever and altered mental status. Signs and symptoms of meningitis are absent at this time. His knee prostheses could have harbored an infection preoperatively and therefore warrant close examination. Patients sometimes have adverse reactions to medications they have been prescribed but are not exposed to until hospitalization, although his surgical procedure was likely done on an outpatient basis. Empiric thiamine should be administered early given his confusion and alcohol habits.

Basic laboratories revealed a hemoglobin of 11.2 g/dL, white blood cell (WBC) count of 6,900/mm3 with 75% neutrophils, platelets of 206,000/mm3. Mean corpuscular volume was 97 mm3. Serum albumin was 2.4 g/dl, sodium 134 mmol/L, potassium 3.9 mmol/L, blood urea nitrogen 12 mg/dL, and creatinine 0.9 mg/dL. The aspartate aminotransferase was 93 U/L, alanine aminotransferase 73 U/L, alkaline phosphatase 254 U/L, and total bilirubin 1.0 mg/dL. Urinalysis was normal. Over the next 16 days fevers and waxing and waning mentation continued. The following studies were normal or negative: blood and urine cultures; transthoracic echocardiogram, antinuclear antibodies, hepatitis B surface antigen, hepatitis C antibody, and human immunodeficiency virus antibody; magnetic resonance imaging of the brain, electroencephalogram, and lower extremity venous ultrasound.

Hypoalbuminemia may signal chronic illness, hypoproduction from liver disease (caused by his heavy alcohol use), or losses from the kidney or gastrointestinal tract. His anemia may reflect chronic disease or point toward a specific underlying disorder. For example, fever and anemia could arise from hemolytic processes such as thrombotic thrombocytopenic purpura or clostridial infections.

An extensive workup has not revealed a cause for his prolonged fever (eg, infection, malignancy, autoimmune condition, or toxin). Likewise, an explanation for confusion is lacking. Because systemic illness and structural brain disease have not been uncovered, a lumbar puncture is indicated.

A lumbar puncture under fluoroscopic guidance revealed a cerebrospinal fluid (CSF) WBC count of 6/mm3, red blood cell count (RBC) 2255/mm3, protein 49 mg/dL, and glucose 54 mg/dL. The WBC differential was not reported. No growth was reported on bacterial cultures. Polymerase chain reactions for enterovirus and herpes simplex viruses 1 and 2 were negative. Cryptococcal antigen and Venereal Disease Research Laboratory serologies were also negative.

A CSF WBC count of 6 is out of the normal range, but could be explained by a traumatic tap given the elevated RBC; the protein and glucose are likewise at the border of normal. Collectively, these are nonspecific findings that could point to an infectious or noninfectious cause of intrathecal or paraspinous inflammation, but are not suggestive of bacterial meningitis.

The patient developed pneumonia, for which he received ertapenem. On hospital day 17 he was intubated for hypoxia and respiratory distress and was extubated after 4 days of mechanical ventilation. Increasing weakness in all extremities prompted magnetic resonance imaging of the spine, which revealed fluid and enhancement involving the soft tissues around C3‐C4 and C5‐C6, raising concerns for discitis and osteomyelitis. Possible septic arthritis at the C3‐C4 and C4‐C5 facets was noted. Ring enhancing fluid collections from T2‐T8 compatible with an epidural abscess with cord compression at T4‐T5 and T6‐T7 were seen. Enhancement and fluid involving the facet joints between T2‐T7 was also consistent with septic arthritis (Figure 1).

Figure 1
Magnetic resonance imaging of the spine showing abnormal soft tissue adjacent to the right costovertebral junction with extension through the neural foramen and cord compression at T5.

His pneumonia appears to have developed many days into his hospitalization, and therefore is unlikely to account for his initial fever and confusion. Blood cultures and echocardiogram have not suggested an endovascular infection that could account for such widespread vertebral and epidural deposition. A wide number of bacteria can cause epidural abscesses and septic arthritis, most commonly Staphylococcus aureus. Less common pathogens with a predilection for osteoarticular involvement, such as Brucella species, warrant consideration when there is appropriate epidemiologic risk.

Systemic bacterial infection remains a concern with his alcoholism rendering him partially immunosuppressed. However, a large number of adjacent spinal joints harboring a bacterial infection is unusual, and a working diagnosis of multilevel spinal infection, therefore, should prompt consideration of noninfectious processes. When a patient develops a swollen peripheral joint and fever in the postoperative setting, gout or pseudogout is a leading consideration. That same thinking should be applied to the vertebrae, where spinal gout can manifest. Surgery itself or associated changes in alcohol consumption patterns or changes in medications (at least 4 of which are relevant to goutcolchicine, hydrochlorothiazide, probenecid, and ibuprofen) could predispose him to a flare.

Aspiration of the epidural collection yielded a negative Gram stain and culture. He developed swelling in the bilateral proximal interphalangeal joints and was treated with steroids and colchicine for suspected gout flare. Vancomycin and piperacillin‐tazobactam were initiated, and on hospital day 22 the patient was transferred to another hospital for further evaluation by neurosurgery.

The negative Gram stain and culture argues against septic arthritis, but these are imperfect tests and will not detect atypical pathogens (eg, spinal tuberculosis). Reexamination of the aspirate for urate and calcium pyrophosphate crystals would be useful. Initiation of steroids in the setting of potentially undiagnosed infection requires a careful risk/benefit analysis. It may be reasonable to treat the patient with colchicine alone while withholding steroids and avoiding nonsteroidal agents in case invasive procedures are planned.

On exam his temperature was 36C, blood pressure 156/92 mm Hg, pulse 100 beats per minute, respirations 21 per minute, and oxygenation 97% on room air. He was not in acute distress and was only oriented to self. Bilateral 2+ lower extremity pitting edema up to the knees was noted. Examination of the heart and lungs was unremarkable. Gouty tophi were noted over both elbows. His joints were normal.

Cranial nerves IIXII were normal. Motor exam revealed normal muscle tone and bulk. Muscle strength was approximately 3/5 in the right upper extremity and 4+/5 in the left upper extremity. Bilateral lower extremity strength was 3/5 in hip flexion, knee flexion, and knee extension. Dorsiflexion and plantar flexion were approximately 2/5 bilaterally. Sensation was intact to light touch and pinprick, and proprioception was normal. Gait was not tested. A Foley catheter was in place.

This examination confirms ongoing encephalopathy and incomplete quadriplegia. The lower extremity weakness is nearly equal proximally and distally, which can be seen with an advanced peripheral neuropathy but is more characteristic of myelopathy. The expected concomitant sensory deficit of myelopathy is not present, although this may be difficult to detect in a confused patient. Reflex testing would help in distinguishing myelopathy (favored because of the imaging findings) from a rapid progressive peripheral motor neuropathy (eg, acute inflammatory demyelinating polyneuropathy or acute intermittent porphyria).

The pitting edema likely represents fluid overload, which can be nonspecific after prolonged immobility during hospitalization; hypoalbuminemia is oftentimes speculated to play a role when this develops. His alcohol use puts him at risk for heart failure (although there is no evidence of this on exam) and liver disease (which his liver function tests suggest). The tophi speak to the extent and chronicity of his hyperuricemia.

On arrival he reported recent onset diarrhea. Medications at transfer included metoprolol, omeprazole, prednisone, piperacillin/tazobactam, vancomycin, and colchicine; acetaminophen, bisacodyl, diphenhydramine, fentanyl, subcutaneous insulin, and labetalol were administered as needed. Laboratory studies included a hemoglobin of 9.5 g/dL, WBC count of 7,300/mm3 with 95% neutrophils, platelets 301,000/mm3, sodium 151 mmol/L, potassium 2.9 mmol/L, blood urea nitrogen 76 mg/dL, creatinine 2.0 mg/dL, aspartate aminotransferase 171 U/L, and alanine aminotransferase 127 U/L. Serum albumin was 1.7 g/dL.

At least 3 of his medicationsdiphenhydramine, fentanyl, and prednisonemay be contributing to his ongoing altered mental status, which may be further compounded by hypernatremia. Although his liver disease remains uncharacterized, hepatic encephalopathy may be contributing to his confusion as well.

Colchicine is likely responsible for his diarrhea, which would be the most readily available explanation for his hypernatremia, hypokalemia, and acute kidney injury (AKI). Acute kidney injury could result from progressive liver disease (hepatorenal syndrome), decreased arterial perfusion (suggested by third spacing or his diarrhea), acute tubular necrosis (from infection or medication), or urinary retention secondary to catheter obstruction. Acute hyperuricemia can also cause AKI (urate nephropathy).

Anemia has progressed and requires evaluation for blood loss as well as hemolysis. Hepatotoxicity from any of his medications (eg, acetaminophen) must be considered. Coagulation studies and review of the previous abdominal computed tomography would help determine the extent of his liver disease.

Neurosurgical consultation was obtained and the patient and his family elected to proceed with a thoracic laminectomy. Cheesy fluid was identified at the facet joints at T6‐T7, which was found to contain rare deposits of monosodium urate crystals. Surgical specimen cultures were sterile. His mental status and strength slowly improved to baseline following the surgery. He was discharged on postoperative day 7 to a rehabilitation facility. On the telephone follow‐up he reported that he has regained his strength completely.

The fluid analysis and clinical course confirm spinal gout. The presenting encephalopathy remains unexplained; I am unaware of gout leading to altered mental status.

COMMENTARY

Gout is an inflammatory condition triggered by the deposition of monosodium urate crystals in tissues in association with hyperuricemia.[1] Based on the 2007-2008 National Health and Nutrition Examination Survey, the prevalence of gout among US adults was 3.9% (8.3 million individuals).[2] These rates are increasing and are thought to be spurred by the aging population, increasing rates of obesity, and changing dietary habits, including increased consumption of soft drinks and red meat.[3, 4, 5] The development of gout during hospitalization can prolong length of stay, and the implementation of a management protocol appears to help decrease treatment delays and the inappropriate discontinuation of gout prophylaxis.[6, 7] Surgery, with its associated physiologic stressors, can trigger gout, which is often polyarticular and presents with fever, prompting testing and consultations for the febrile episode.[8]

Gout is an ancient disease that is familiar to most clinicians. In 1666, Daniel Sennert, a German physician, described gout as "the physician's shame" because of its infrequent recognition.[9] Clinical gout spans 3 stages: asymptomatic hyperuricemia, acute and intercritical gout, and chronic gouty arthritis. The typical acute presentation is monoarticular, with the abrupt onset of pain, swelling, warmth, and erythema in a peripheral joint. It manifests most characteristically in the first metatarsophalangeal joint (podagra), but also frequently involves the midfoot, ankle, knee, and wrist and sometimes affects multiple joints simultaneously (polyarticular gout).[1, 10] The visualization of monosodium urate crystals either in synovial fluid or from a tophus is diagnostic of gout; however, guidelines recognize that a classic presentation of gout may be diagnosed based on clinical criteria alone.[11] Dual energy computerized tomography and ultrasonography are emerging as techniques for the visualization of monosodium urate crystals; however, they are not currently routinely recommended.[12]

There are many unusual presentations of gout, with an increase in such reports paralleling both the overall increase in the prevalence of gout and improvements in available imaging techniques.[13] Atypical presentations present diagnostic challenges and are often caused by tophaceous deposits in unusual locations. Reports of atypical gout have described entrapment neuropathies (eg, gouty deposits inducing carpal tunnel syndrome), ocular gout manifested as conjunctival deposits and uveitis, pancreatic gout presenting as a mass, and dermatologic manifestations including panniculitis.[13, 14]

Spinal gout (also known as axial gout) manifests when crystal-induced inflammation, erosive arthritis, and tophaceous deposits occur along the spinal column. A cross-sectional study of patients with poorly controlled gout reported the prevalence of spinal gout diagnosed by computerized tomography to be 35%; these radiographic findings were not consistently correlated with back pain.[15] Imaging features that are suggestive of spinal gout include intra-articular and juxta-articular erosions with sclerotic margins and density greater than that of the surrounding muscle. Periosteal new bone formation adjacent to bony destruction can form overhanging edges.[16] When retrospectively presented with the final diagnosis, the radiologist at our institution noted that the appearance was "typical gout in an atypical location."

Spinal gout can be confused with spinal metastasis, infection, and stenosis. It can remain asymptomatic or present with back pain, radiculopathy, or cord compression. The lumbar spine is the most frequently affected site.[17, 18] Many patients with spinal gout have had chronic tophaceous gout with radiologic evidence of erosions in the peripheral joints.[15] Patients with spinal gout also have elevated urate levels and markers of inflammation.[18] Surgical decompression and stabilization is recommended when there is frank cord compression, progressive neurologic compromise, or lack of improvement with gout therapy alone.[18]

This patient's male gender, history of gout, hypertension, alcohol consumption, and thiazide diuretic use placed him at an increased risk of a gout attack.[19, 20] The possible interruption of urate-lowering therapy for the surgical procedure, and the surgery itself, further heightened his risk of suffering acute gouty arthritis in the perioperative period.[21] The patient's encephalopathy may have masked back pain and precluded an accurate neurologic exam. To our knowledge, there is one case report describing encephalopathy that was possibly related to gout and improved with colchicine.[22] This patient's encephalopathy was deemed multifactorial and attributed to alcohol withdrawal, medications (including opioids and steroids), and infection (pneumonia).

Gout is best known for its peripheral arthritis and is rarely invoked in the consideration of spinal and myelopathic processes, where more pressing competing diagnoses, such as infection and malignancy, are typically considered. In addition, when surgical specimens are submitted for pathologic examination in formaldehyde (rather than alcohol), monosodium urate crystals dissolve and are thus difficult to identify in the specimen.

This case reminds us that gout remains a diagnostic challenge and should be considered in the differential diagnosis of an inflammatory process. Awareness of the multifaceted nature of gout allows earlier recognition and treatment of the less typical presentations of this ancient malady.

KEY TEACHING POINTS

  1. Crystalline disease is a common cause of postoperative arthritis.
  2. Gout (and pseudogout) should be considered in cases of focal inflammation (detected by examination or imaging) when the evidence or predisposition for infection is limited or nonexistent.
  3. Spinal gout presents with back pain, radiculopathy, or cord compression and may be confused with spinal metastasis, infection, and stenosis.

Acknowledgements

The authors thank Dr. Kari Waddell and Elaine Bammerlin for their assistance in the preparation of this manuscript.

Disclosure: Nothing to report.

References
  1. Burns CM, Wortmann RL. Clinical features and treatment of gout. In: Firestein GS, Budd RC, Gabriel SE, McInnes IB, O'Dell JR, eds. Kelley's Textbook of Rheumatology. Vol 2. 9th ed. Philadelphia, PA: Elsevier/Saunders; 2013:1544-1575.
  2. Zhu Y, Pandya BJ, Choi HK. Prevalence of gout and hyperuricemia in the US general population: the National Health and Nutrition Examination Survey 2007-2008. Arthritis Rheum. 2011;63(10):3136-3141.
  3. Wallace KL, Riedel AA, Joseph-Ridge N, Wortmann R. Increasing prevalence of gout and hyperuricemia over 10 years among older adults in a managed care population. J Rheumatol. 2004;31(8):1582-1587.
  4. Choi HK, Atkinson K, Karlson EW, Willett W, Curhan G. Purine-rich foods, dairy and protein intake, and the risk of gout in men. N Engl J Med. 2004;350(11):1093-1103.
  5. Choi HK, Willett W, Curhan G. Fructose-rich beverages and risk of gout in women. JAMA. 2010;304(20):2270-2278.
  6. Lee G, Roberts L. Healthcare burden of in-hospital gout. Intern Med J. 2012;42(11):1261-1263.
  7. Kamalaraj N, Gnanenthiran SR, Kathirgamanathan T, Hassett GM, Gibson KA, McNeil HP. Improved management of acute gout during hospitalization following introduction of a protocol. Int J Rheum Dis. 2012;15(6):512-520.
  8. Craig MH, Poole GV, Hauser CJ. Postsurgical gout. Am Surg. 1995;61(1):56-59.
  9. Karsh R, McCarthy JD. Evolution of modern medicine. Arch Intern Med. 1960;105(4):640-644.
  10. Neogi T. Clinical practice. Gout. N Engl J Med. 2011;364(5):443-452.
  11. Shmerling RH. Management of gout: a 57-year-old man with a history of podagra, hyperuricemia, and mild renal insufficiency. JAMA. 2012;308(20):2133-2141.
  12. Rettenbacher T, Ennemoser S, Weirich H, et al. Diagnostic imaging of gout: comparison of high-resolution US versus conventional X-ray. Eur Radiol. 2008;18(3):621-630.
  13. Forbess LJ, Fields TR. The broad spectrum of urate crystal deposition: unusual presentations of gouty tophi. Semin Arthritis Rheum. 2012;42(2):146-154.
  14. Ning TC, Keenan RT. Unusual clinical presentations of gout. Curr Opin Rheumatol. 2010;22(2):181-187.
  15. Konatalapalli RM, Lumezanu E, Jelinek JS, Murphey MD, Wang H, Weinstein A. Correlates of axial gout: a cross-sectional study. J Rheumatol. 2012;39(7):1445-1449.
  16. Saketkoo LA, Robertson HJ, Dyer HR, Virk Z-U, Ferreyro HR, Espinoza LR. Axial gouty arthropathy. Am J Med Sci. 2009;338(2):140-146.
  17. Lumezanu E, Konatalapalli R, Weinstein A. Axial (spinal) gout. Curr Rheumatol Rep. 2012;14(2):161-164.
  18. Hou LC, Hsu AR, Veeravagu A, Boakye M. Spinal gout in a renal transplant patient: a case report and literature review. Surg Neurol. 2007;67(1):65-73.
  19. Zhang Y, Woods R, Chaisson CE, et al. Alcohol consumption as a trigger of recurrent gout attacks. Am J Med. 2006;119(9):800.e11-800.e16.
  20. Hunter D, York M, Chaisson CE, Woods R, Niu J, Zhang Y. Recent diuretic use and the risk of recurrent gout attacks: the online case-crossover gout study. J Rheumatol. 2006;33(7):1341-1345.
  21. Kang EH, Lee EY, Lee YJ, Song YW, Lee EB. Clinical features and risk factors of postsurgical gout. Ann Rheum Dis. 2008;67(9):1271-1275.
  22. Alla P, Carli P, Cellarier G, Paris JF. Gouty encephalopathy: myth or reality [in French]? Rev Med Interne. 1997;18(6):474-476.
Issue
Journal of Hospital Medicine - 8(5)
Page Number
267-270
Article Source
Copyright © 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Areeba Kara, MD, Assistant Professor of Clinical Medicine, Department of Inpatient Medicine, Indiana University Health Physicians, 1633 N Capitol Avenue, Indianapolis, IN 46202; Telephone: 317‐962‐1889; Fax: 317‐962‐0838; E‐mail: [email protected]

Rapid Response Systems

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Rapid response systems: Should we still question their implementation?

In 2006,[1] we questioned whether rapid response systems (RRSs) were an effective strategy for detecting and managing deteriorating general ward patients. Since then, the implementation of RRSs has flourished, especially in the United States, where accreditors (Joint Commission)[2] and patient-safety organizations (Institute for Healthcare Improvement 100,000 Lives Campaign)[3] have strongly supported RRSs. Decades of evidence show that general ward patients often experience unrecognized deterioration and cardiorespiratory arrest (CA). The low sensitivity and accuracy of periodic assessments by staff are thought to be a major reason for these lapses, as are imbalances between patient needs and clinician (primarily nursing) resources. Additionally, a medical culture that punishes speaking up or bypassing the chain of command is also a likely contributor to the problem. A system that effectively recognizes the early signs of deterioration and quickly responds should catch problems before they become life threatening. Over the last decade, RRSs have been the primary intervention implemented to do this. The potential for RRSs to improve outcomes has strong face validity, but researchers have struggled to demonstrate consistent improvements in outcomes across institutions. Given this, are RRSs the best intervention to prevent this failure to rescue? In this editorial we examine the progress of RRSs, how they compare to other options, and whether we should continue to question their implementation.

In our 2007 systematic review,[4] we concluded there was weak to moderate evidence supporting RRSs. Since then, 6 other systematic reviews of the effectiveness or implementation of RRSs have been published. One high-quality review of effectiveness studies published through 2008 by Chan et al.[5] found that RRSs significantly reduced non-intensive care unit (ICU) CA (relative risk [RR], 0.66; 95% confidence interval [CI], 0.54-0.80), but not total hospital mortality (RR, 0.96; 95% CI, 0.84-1.09) in adult inpatients. In pediatric inpatients, RRSs led to significant improvements in both non-ICU CA (RR, 0.62; 95% CI, 0.46-0.84) and total hospital mortality (RR, 0.79; 95% CI, 0.63-0.98). Subsequent to 2008, a structured search[6] identified 26 additional studies.[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] Although the benefit for CA in both adults and children has remained robust, even more so since Chan's review, mortality reductions in adult patients have shown the most notable shift. In aggregate, the point estimate for adult mortality (for those studies providing analyzable data) has strengthened to 0.88, with a confidence interval of 0.82-0.96 in favor of the RRS strategy.
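
For readers less familiar with how an aggregate estimate such as the 0.88 (95% CI, 0.82-0.96) quoted above is produced, the sketch below illustrates a minimal fixed-effect (inverse-variance) pooling of log relative risks, one standard way of combining per-study results. The study counts are invented placeholders rather than data from the cited reviews, and a full meta-analysis would also assess heterogeneity and often use a random-effects model.

```python
import math

# Hypothetical per-study results: (events_rrs, n_rrs, events_control, n_control).
# These counts are illustrative only; they are NOT taken from the cited reviews.
studies = [
    (45, 12000, 60, 11800),
    (30, 8000, 41, 8100),
    (22, 5000, 25, 4900),
]

weights, weighted_log_rr = [], []
for a, n1, c, n2 in studies:
    rr = (a / n1) / (c / n2)                 # relative risk for one study
    se = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)  # standard error of log(RR)
    w = 1 / se**2                            # inverse-variance weight
    weights.append(w)
    weighted_log_rr.append(w * math.log(rr))

pooled_log_rr = sum(weighted_log_rr) / sum(weights)
pooled_se = math.sqrt(1 / sum(weights))

pooled_rr = math.exp(pooled_log_rr)
ci_low = math.exp(pooled_log_rr - 1.96 * pooled_se)
ci_high = math.exp(pooled_log_rr + 1.96 * pooled_se)
print(f"Pooled RR {pooled_rr:.2f} (95% CI, {ci_low:.2f}-{ci_high:.2f})")
```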

This change has occurred as the analyzable studies since 2008 have all had favorable point estimates, and 4 have had statistically significant confidence intervals. Prior to 2008, 5 had unfavorable point estimates, and only 2 had favorable confidence intervals. As RRSs expand, the benefits, although not universal (some hospitals still experience no improvement in outcomes), seem to be getting stronger and more consistent. This may be secondary to maturation of the intervention and implementation strategies, or it may be the result of secular trends outside of the RRS intervention, although studies controlling for this found it not to be the case.[10] The factors associated with successful implementation of the RRS or improved outcomes include knowledge of activation criteria, communication, teamwork, lack of criticism for activating the RRS, and better attitudes about the team's positive effect on nurses and patients. Many of these factors relate to an improved safety culture in general. Additionally, activation rates may have increased in more recent studies, as greater utilization is associated with improved outcomes.[31] Finally, RRSs, like other patient‐safety and quality interventions, mature with time, often taking several years before they have a full effect on outcomes.[31, 32]

Despite these more favorable results for RRSs, we still see a large discrepancy between the magnitude of benefit for CA and mortality. This may partly be because the exposure groups are different; most studies examined non‐ICU CA, yet studies reporting mortality used total hospital mortality (ICU and non‐ICU). Additionally, although RRSs may effectively prevent CA, this intervention may have a more limited effect in preventing the patient's ultimate demise (particularly in the ICU).

We also still see that effectiveness reports for RRSs continue to be of low to moderate quality. Many reports give no statistics or denominator data or have missing data. Few control for secular trends in providers, outcomes, and confounders. Outcome measures vary widely, and none conducted blinded outcome assessments. Most studies use a pre‐post design without concurrent controls, substantially increasing the risk of bias. The better‐designed studies that use concurrent controls or cluster randomization (Priestley,[33] Bristow,[34] and the MERIT trial[35]) tend to show lower treatment effects, although interestingly in the MERIT trial, while the cluster‐randomized data showed no benefit, the pre‐post data showed significant improvement in the RRS intervention hospitals. These results have been attributed to the control hospitals using their code teams for RRS activities,[36] negating a comparative improvement in the intervention hospitals.

Can we improve RRS research? Likely, yes. We can begin by being more careful about defining the exposure group. Ideally, studies should not include data from the ICU or the emergency department because these patient populations are not part of the exposure group. Although most studies removed ICU and emergency department data for CA, they did not do so for hospital mortality. ICU mortality is likely biased, because only a small proportion of ICU patients have been exposed to an RRS. Definitions also need to be stringent and uniform. For example, CA may be defined in a variety of ways such as calling the code team versus documented cardiopulmonary resuscitation. Unexpected hospital mortality is often defined as excluding patients with do not resuscitate (DNR) orders, but this may or may not accurately exclude expected deaths. We also need to better attempt to control for confounders and secular trends. Outcomes such as CA and mortality are strongly influenced by changes in patient case‐mix over time, the frequency of care limitation/DNR orders, or by poor triage decisions.[37] Outcomes such as unanticipated ICU admission are indirect and may be heavily influenced by local cultural factors. Finally, authors need to provide robust statistical data and clear numerators and denominators to support their conclusions.
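
To make the call for clear numerators and denominators concrete, the sketch below computes a non-ICU CA rate from a few invented admission records, counting only ward events in the numerator and ward-days in the denominator. The record layout and field names are assumptions chosen for illustration, not a recommended data model or the methodology of any cited study.

```python
# Illustrative only: an explicit numerator/denominator for non-ICU cardiorespiratory
# arrest (CA), excluding ICU and emergency department events. Records are invented.
admissions = [
    {"id": 1, "ward_days": 5, "ca_events": [{"location": "ward"}]},
    {"id": 2, "ward_days": 3, "ca_events": []},
    {"id": 3, "ward_days": 7, "ca_events": [{"location": "icu"}]},  # excluded from numerator
    {"id": 4, "ward_days": 2, "ca_events": [{"location": "ward"}]},
]

ward_arrests = sum(
    1 for a in admissions for e in a["ca_events"] if e["location"] == "ward"
)
ward_days = sum(a["ward_days"] for a in admissions)
rate_per_1000 = 1000 * ward_arrests / ward_days

print(f"Non-ICU CA rate: {rate_per_1000:.1f} per 1,000 ward-days "
      f"({ward_arrests} events / {ward_days} ward-days)")
```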

Although we need to do our best to improve the quality of the RRS literature, the near ubiquitous presence of this patient-safety intervention in North American hospitals raises a crucial question: do we even need more effectiveness studies, and if so, what kind? Randomized controlled trials are not likely. It is hard to argue that we still sit at a position of equipoise, and randomizing deteriorating patients to standard care versus an RRS is neither practical nor ethical. Finding appropriate concurrent control hospitals that have not implemented some type of RRS would also be very difficult.

We should, however, continue to test the effectiveness of RRSs but in a more diverse manner. RRSs should be more directly compared to other interventions that can improve the problem of failure to rescue such as increased nurse staffing[38, 39, 40] and hospitalist staffing.[41] The low sensitivity and accuracy of monitoring vital signs on general wards by staff is also an area strongly deserving of investigation, as it is likely central to the problem. Researchers have sought to use various combinations of vital signs, including aggregated or weighted scoring systems, and recent data suggest some approaches may be superior to others.[42] Many have advocated for continuous monitoring of a limited set of vital signs similar to the ICU, and there are some recent data indicating that this might be effective.[43, 44] This work is in the early stages, and we do not yet know whether this strategy will affect outcomes. It is conceivable that if the false alarm rate can be kept very low and we can minimize the failure to recognize deteriorating patients (good sensitivity, specificity, and positive predictive value), the need for the RRS response team may be reduced or even eliminated. Additionally, as electronic medical records (EMRs) have expanded, there has been growing interest in leveraging these systems to improve the effectiveness of RRSs.[45] There is a tremendous amount of information within the EMRs that can be used to complement vital‐sign monitoring (manual or continuous), because baseline medical problems, laboratory values, and recent history may have a strong impact on the predictive value of changes in vital signs.
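
As an illustration of the aggregated, weighted scoring systems mentioned above, the sketch below assigns points to a single set of ward vital signs in the spirit of tools such as the National Early Warning Score (NEWS).[42] The band cut points are simplified approximations chosen for readability rather than the published thresholds, and a real EMR implementation would also score supplemental oxygen use and level of consciousness and would trigger escalation at a locally agreed total.

```python
def band(value, bands):
    """Return the points for the first (low, high, points) band containing value."""
    for low, high, points in bands:
        if low <= value <= high:
            return points
    return 3  # values outside every listed band score maximally

# Simplified, NEWS-like bands (approximate, for illustration; not the official thresholds).
RESP_RATE  = [(12, 20, 0), (9, 11, 1), (21, 24, 2)]
SPO2       = [(96, 100, 0), (94, 95, 1), (92, 93, 2)]
TEMP_C     = [(36.1, 38.0, 0), (35.1, 36.0, 1), (38.1, 39.0, 1), (39.1, 42.0, 2)]
SYS_BP     = [(111, 219, 0), (101, 110, 1), (91, 100, 2)]
HEART_RATE = [(51, 90, 0), (41, 50, 1), (91, 110, 1), (111, 130, 2)]

def early_warning_score(rr, spo2, temp, sbp, hr):
    return (band(rr, RESP_RATE) + band(spo2, SPO2) + band(temp, TEMP_C)
            + band(sbp, SYS_BP) + band(hr, HEART_RATE))

# Example: a febrile, tachycardic, mildly hypoxemic ward patient.
score = early_warning_score(rr=24, spo2=93, temp=38.9, sbp=118, hr=119)
print(f"Aggregate early warning score: {score}")  # a high total would prompt RRS review
```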

Research should also focus on the possible unintended consequences, costs, and the cost-effectiveness of RRSs compared with other interventions that can or may reduce the rate of failure to rescue. Certainly, establishing RRSs has costs, including staff time and the need to pull staff from other clinical duties to respond. Unintended harm, such as diversion of ICU staff from their usual care, is often mentioned but never rigorously evaluated. Increasing nurse staffing has very substantial costs, but how these costs compare to the costs of the RRS is unclear, although the comparison would likely be very favorable to the RRS, because RRS staffing typically relies on existing employees with expertise in caring for the critically ill as opposed to workforce expansion. Given the current healthcare economic climate, any model that relies on additional employees is not likely to gain support. Establishing continuous monitoring systems has up-front capital costs, although they may reduce other costs in the long run (eg, staff, medical liability). They also have intangible costs for provider workload if the false alarm rates are too high. Again, this strategy is too new to know the answers to these concerns. As we move forward, such evaluations are needed to guide policy decisions.

We also need more evaluation of RRS implementation science. The optimal way to organize, train, and staff RRSs is unknown. Most programs use physician‐led teams, although some use nurse‐led teams. Few studies have compared the various models, although 1 study that compared a resident‐led to an attending‐led team found no difference.[17] Education is ubiquitous, although actual staff training (simulation for example) is not commonly described. In addition, there is wide variation in the frequency of RRS activation. We know nurses and residents often feel pressured not to activate RRSs, and much of the success of the RRS relies on nurses identifying deteriorating patients and calling the response team. The use of continuous monitoring combined with automatic notification of staff may reduce the barriers to activating RRSs, increasing activation rates, but until then we need more understanding of how to break down these barriers. Family/patient access to activation has also gained ground (1 program demonstrated outcome improvement only after this was established[13]), but is not yet widespread.

The role of the RRS in improving processes of care, such as the appropriate institution of DNR orders, end-of-life/palliative care discussions, and early goal-directed therapy for sepsis, has been examined in several studies[46, 47] but remains inadequately evaluated. Here too, there is much to learn about how we might realize the full effectiveness of this patient-safety strategy beyond outcomes such as CA and hospital mortality. Ideally, if all appropriate patients had DNR orders and we stopped failing to recognize and respond to deteriorating ward patients, CAs on general hospital wards could be nearly eliminated.

RRSs have been described as a band‐aid for a failed model of general ward care.[37] What is clear is that many patients suffer preventable harm from unrecognized deterioration. This needs to be challenged, but are RRSs the best intervention? Despite the Joint Commission's Patient Safety Goal 16, should we still question their implementation? Should we (and the Joint Commission) reconsider our approach and prioritize our efforts elsewhere or should we feel comfortable with the investment that we have made in these systems? Even though there are many unknowns, and the quality of RRS studies needs improvement, the literature is accumulating that RRSs do reduce non‐ICU CA and improve hospital mortality. Without direct comparison studies demonstrating superiority of other expensive strategies, there is little reason to reconsider the RRS concept or question their implementation and our investment. We should instead invest further in this foundational patient‐safety strategy to make it as effective as it can be.

Disclosures: Dr. Pronovost reports the following potential conflicts of interest: grant or contract support from the Agency for Healthcare Research and Quality, the Gordon and Betty Moore Foundation (research related to patient safety and quality of care), and the National Institutes of Health (acute lung injury research); consulting fees from the Association of Professionals in Infection Control and Epidemiology, Inc.; honoraria from various hospitals, health systems, and the Leigh Bureau to speak on quality and patient safety; book royalties from the Penguin Group; and board membership for the Cantel Medical Group. Dr. Winters reports the following potential conflicts of interest: contract or grant support from Masimo Corporation, honoraria from 3M Corporation and various hospitals and health systems, royalties from Lippincott Williams & Wilkins (UpToDate), and consulting fees from several legal firms for medical legal consulting.

References
  1. Winters BD, Pham J, Pronovost PJ. Rapid response teams: walk, don't run. JAMA. 2006;296:1645-1647.
  2. Joint Commission requirement: The Joint Commission announces the 2008 National Patient Safety Goals and Requirements. Jt Comm Perspect. 2007;27(7):122.
  3. Institute for Healthcare Improvement. 5 million lives campaign: overview. Available at: http://www.ihi.org/offerings/Initiatives/PastStrategicInitiatives/5MillionLivesCampaign/Pages/default.aspx. Accessed November 28, 2012.
  4. Winters BD, Pham JC, Hunt EA, Guallar E, Berenholtz S, Pronovost PJ. Rapid response systems: a systematic review. Crit Care Med. 2007;35:1238-1243.
  5. Chan PS, Jain R, Nallmothu BK, Berg RA, Sasson C. Rapid response teams: a systematic review and meta-analysis. Arch Intern Med. 2010;170:18-26.
  6. Winters BD, Weaver SJ, Pfoh ER, Yang T, Pham JC, Dy SM. Rapid-response systems as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158:417-425.
  7. Chan PS, Khalid A, Longmore LS, Berg RA, Kosiborod M, Spertus JA. Hospital-wide code rates and mortality before and after implementation of a rapid response team. JAMA. 2008;300:2506-2513.
  8. Anwar ul Haque H, Saleem AF, Zaidi S, Haider SR. Experience of pediatric rapid response team in a tertiary care hospital in Pakistan. Indian J Pediatr. 2010;77:273-276.
  9. Bader MK, Neal B, Johnson L, et al. Rescue me: saving the vulnerable non-ICU patient population. Jt Comm J Qual Patient Saf. 2009;35:199-205.
  10. Beitler JR, Link N, Bails DB, Hurdle K, Chong DH. Reduction in hospital-wide mortality after implementation of a rapid response team: a long-term cohort study. Crit Care. 2011;15:R269.
  11. Benson L, Mitchell C, Link M, Carlson G, Fisher J. Using an advanced practice nursing model for a rapid response team. Jt Comm J Qual Patient Saf. 2008;34:743-747.
  12. Campello G, Granja C, Carvalho F, Dias C, Azevedo LF, Costa-Pereira A. Immediate and long-term impact of medical emergency teams on cardiac arrest prevalence and mortality: a plea for periodic basic life-support training programs. Crit Care Med. 2009;37:3054-3061.
  13. Gerdik C, Vallish RO, Miles K, Godwin SA, Wludyka PS, Panni MK. Successful implementation of a family and patient activated rapid response team in an adult level 1 trauma center. Resuscitation. 2010;81:1676-1681.
  14. Hanson CC, Randolph GD, Erickson JA, et al. A reduction in cardiac arrests and duration of clinical instability after implementation of a paediatric rapid response system. Qual Saf Health Care. 2009;18:500-504.
  15. Hatler C, Mast D, Bedker D, et al. Implementing a rapid response team to decrease emergencies outside the ICU: one hospital's experience. Medsurg Nurs. 2009;18:84-90, 126.
  16. Howell MD, Ngo L, Folcarelli P, et al. Sustained effectiveness of a primary-team-based rapid response system. Crit Care Med. 2012;40:2562-2568.
  17. Karvellas CJ, Souza IA, Gibney RT, Bagshaw SM. Association between implementation of an intensivist-led medical emergency team and mortality. BMJ Qual Saf. 2012;21:152-159.
  18. Konrad D, Jaderling G, Bell M, Granath F, Ekbom A, Martling CR. Reducing in-hospital cardiac arrests and hospital mortality by introducing a medical emergency team. Intensive Care Med. 2010;36:100-106.
  19. Kotsakis A, Lobos AT, Parshuram C, et al. Implementation of a multicenter rapid response system in pediatric academic hospitals is effective. Pediatrics. 2011;128:72-78.
  20. Laurens N, Dwyer T. The impact of medical emergency teams on ICU admission rates, cardiopulmonary arrests and mortality in a regional hospital. Resuscitation. 2011;82:707-712.
  21. Lighthall GK, Parast LM, Rapoport L, Wagner TH. Introduction of a rapid response system at a United States veterans affairs hospital reduced cardiac arrests. Anesth Analg. 2010;111:679-686.
  22. Medina-Rivera B, Campos-Santiago Z, Palacios AT, Rodriguez-Cintron W. The effect of the medical emergency team on unexpected cardiac arrest and death at the VA Caribbean healthcare system: a retrospective study. Crit Care Shock. 2010;13:98-105.
  23. Rothberg MB, Belforti R, Fitzgerald J, Friderici J, Keyes M. Four years' experience with a hospitalist-led medical emergency team: an interrupted time series. J Hosp Med. 2012;7:98-103.
  24. Santamaria J, Tobin A, Holmes J. Changing cardiac arrest and hospital mortality rates through a medical emergency team takes time and constant review. Crit Care Med. 2010;38:445-450.
  25. Sarani B, Palilonis E, Sonnad S, et al. Clinical emergencies and outcomes in patients admitted to a surgical versus medical service. Resuscitation. 2011;82:415-418.
  26. Scherr K, Wilson DM, Wagner J, Haughian M. Evaluating a new rapid response team: NP-led versus intensivist-led comparisons. AACN Adv Crit Care. 2012;23:32-42.
  27. Scott SS, Elliott S. Implementation of a rapid response team: a success story. Crit Care Nurse. 2009;29:66-75.
  28. Shah SK, Cardenas VJ, Kuo YF, Sharma G. Rapid response team in an academic institution: does it make a difference? Chest. 2011;139:1361-1367.
  29. Tibballs J, Kinney S. Reduction of hospital mortality and of preventable cardiac arrest and death on introduction of a pediatric medical emergency team. Ped Crit Care Med. 2009;10:306-312.
  30. Tobin AE, Santamaria JD. Medical emergency teams are associated with reduced mortality across a major metropolitan health network after two years service: a retrospective study using government administrative data. Crit Care. 2012;16:R210.
  31. Jones D, Bellomo R, Bates S, Warrillow S, et al. Long term effect of a medical emergency team on cardiac arrests in a teaching hospital. Crit Care. 2005;9:R808-R815.
  32. Buist M, Harrison J, Abaloz E, Dyke S. Six year audit of cardiac arrests and medical emergency team calls in an Australian outer metropolitan teaching hospital. BMJ. 2007;335:1210-1212.
  33. Priestley G, Watson W, Rashidian R, et al. Introducing Critical Care Outreach: a ward-randomised trial of phased introduction in a general hospital. Intensive Care Med. 2004;30:1398-1404.
  34. Bristow PJ, Hillman KM, Chey T, et al. Rates of in-hospital arrests, deaths, and intensive care admissions: the effect of a medical emergency team. Med J Aust. 2000;173:236-240.
  35. Hillman K, Chen J, Cretikos M, et al. Introduction of the medical emergency team (MET) system: a cluster randomised controlled trial. Lancet. 2005;365:2091-2097.
  36. Cretikos MA, Chen J, Hillman KM, Bellomo R, Finfer SR, Flabouris A. The effectiveness of implementation of the medical emergency team (MET) system and factors associated with use during the MERIT study. Crit Care Resusc. 2007;9:206-212.
  37. Litvak E, Pronovost PJ. Rethinking rapid response teams. JAMA. 2010;304:1375-1376.
  38. Wiltse Nicely KL, Sloane DM, Aiken LH. Lower mortality for abdominal aortic aneurysm repair in high-volume hospitals is contingent upon nurse staffing [published online ahead of print October 22, 2012]. Health Serv Res. doi: 10.1111/1475-6773.12004.
  39. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346:1715-1722.
  40. Kane RL. The association of registered nurse staffing levels and patient outcomes: systematic review and meta-analysis. Med Care. 2007;45:1195-1204.
  41. Lindenauer PK, Rothberg MB, Pekow PS, Kenwood C, Benjamin EM, Auerbach AD. Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med. 2007;357:2589-2600.
  42. Smith GB, Prytherch DR, Meredith P, Schmidt PE, Featherstone PI. The ability of the National Early Warning Score (NEWS) to discriminate patients at risk of early cardiac arrest, unanticipated intensive care unit admission, and death. Resuscitation. 2013;84:465-470.
  43. Taenzer AH, Pyke JB, McGrath SP, Blike GT. Impact of pulse oximetry surveillance on rescue events and intensive care unit transfers: a before-and-after concurrence study. Anesthesiology. 2010;112:282-287.
  44. Bellomo R, Ackerman M, Bailey M, et al. A controlled trial of electronic automated advisory vital signs monitoring in general hospital wards. Crit Care Med. 2012;40:2349-2361.
  45. Agency for Healthcare Research and Quality. Early warning scoring system proactively identifies patients at risk of deterioration, leading to fewer cardiopulmonary emergencies and deaths. Available at: http://www.innovations.ahrq.gov/content.aspx?id=2607. Accessed March 26, 2013.
  46. Sebat F, Musthafa AA, Johnson D, et al. Effect of a rapid response system for patients in shock on time to treatment and mortality during 5 years. Crit Care Med. 2007;35:2568-2575.
  47. Jones DA, McIntyre T, Baldwin I, Mercer I, Kattula A, Bellomo R. The medical emergency team and end-of-life care: a pilot study. Crit Care Resusc. 2007;9:151-156.
Issue
Journal of Hospital Medicine - 8(5)
Page Number
278-281

In 2006,[1] we questioned whether rapid response systems (RRSs) were an effective strategy for detecting and managing deteriorating general ward patients. Since then, the implementation of RRSs has flourished, especially in the United States where accreditors (Joint Commission)[2] and patient‐safety organizations (Institute for Healthcare Improvement 100,000 Live Campaign)[3] have strongly supported RRSs. Decades of evidence show that general ward patients often experience unrecognized deterioration and cardiorespiratory arrest (CA). The low sensitivity and accuracy of periodic assessments by staff are thought to be a major reason for these lapses, as are imbalances between patient needs and clinician (primarily nursing) resources. Additionally, a medical culture that punishes speaking up or bypassing the chain of command are also likely contributors to the problem. A system that effectively recognizes the early signs of deterioration and quickly responds should catch problems before they become life threatening. Over the last decade, RRSs have been the primary intervention implemented to do this. The potential for RRSs to improve outcomes has strong face validity, but researchers have struggled to demonstrate consistent improvements in outcomes across institutions. Given this, are RRSs the best intervention to prevent this failure to rescue? In this editorial we examine the progress of RRSs, how they compare to other options, and we consider whether we should continue to question their implementation.

In our 2007 systematic review,[4] we concluded there was weak to moderate evidence supporting RRSs. Since then, 6 other systematic reviews of the effectiveness or implementation of RRSs have been published. One high‐quality review of effectiveness studies published through 2008 by Chan et al.[5] found that RRSs significantly reduced non‐intensive care unit (ICU) CA (relative risk [RR], 0.66; 95% confidence interval [CI], 0.54‐0.80), but not total hospital mortality (RR, 0.96; 95% CI, 0.84‐1.09) in adult inpatients. In pediatric inpatients, RRSs led to significant improvements in both non‐ICU CA (RR, 0.62; 95% CI, 0.46 to 0.84) and total hospital mortality (RR, 0.79; 95% CI, 0.63 to 0.98). Subsequent to 2008, a structured search[6] finds 26 additional studies.[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] Although the benefit for CA in both adults and children has remained robust, even more so since Chan's review, mortality reductions in adult patients appear to have had the most notable shift. In aggregate, the point estimate (for those studies providing analyzable data), for adult mortality has strengthened to 0.88, with a confidence interval of 0.82‐0.96 in favor of the RRS strategy.

This change has occurred as the analyzable studies since 2008 have all had favorable point estimates, and 4 have had statistically significant confidence intervals. Prior to 2008, 5 had unfavorable point estimates, and only 2 had favorable confidence intervals. As RRSs expand, the benefits, although not universal (some hospitals still experience no improvement in outcomes), seem to be getting stronger and more consistent. This may be secondary to maturation of the intervention and implementation strategies, or it may be the result of secular trends outside of the RRS intervention, although studies controlling for this found it not to be the case.[10] The factors associated with successful implementation of the RRS or improved outcomes include knowledge of activation criteria, communication, teamwork, lack of criticism for activating the RRS, and better attitudes about the team's positive effect on nurses and patients. Many of these factors relate to an improved safety culture in general. Additionally, activation rates may have increased in more recent studies, as greater utilization is associated with improved outcomes.[31] Finally, RRSs, like other patient‐safety and quality interventions, mature with time, often taking several years before they have a full effect on outcomes.[31, 32]

Despite these more favorable results for RRSs, we still see a large discrepancy between the magnitude of benefit for CA and mortality. This may partly be because the exposure groups are different; most studies examined non‐ICU CA, yet studies reporting mortality used total hospital mortality (ICU and non‐ICU). Additionally, although RRSs may effectively prevent CA, this intervention may have a more limited effect in preventing the patient's ultimate demise (particularly in the ICU).

We also still see that effectiveness reports for RRSs continue to be of low to moderate quality. Many reports give no statistics or denominator data or have missing data. Few control for secular trends in providers, outcomes, and confounders. Outcome measures vary widely, and none conducted blinded outcome assessments. Most studies use a pre‐post design without concurrent controls, substantially increasing the risk of bias. The better‐designed studies that use concurrent controls or cluster randomization (Priestley,[33] Bristow,[34] and the MERIT trial[35]) tend to show lower treatment effects, although interestingly in the MERIT trial, while the cluster‐randomized data showed no benefit, the pre‐post data showed significant improvement in the RRS intervention hospitals. These results have been attributed to the control hospitals using their code teams for RRS activities,[36] negating a comparative improvement in the intervention hospitals.

Can we improve RRS research? Likely, yes. We can begin by being more careful about defining the exposure group. Ideally, studies should not include data from the ICU or the emergency department because these patient populations are not part of the exposure group. Although most studies removed ICU and emergency department data for CA, they did not do so for hospital mortality. ICU mortality is likely biased, because only a small proportion of ICU patients have been exposed to an RRS. Definitions also need to be stringent and uniform. For example, CA may be defined in a variety of ways such as calling the code team versus documented cardiopulmonary resuscitation. Unexpected hospital mortality is often defined as excluding patients with do not resuscitate (DNR) orders, but this may or may not accurately exclude expected deaths. We also need to better attempt to control for confounders and secular trends. Outcomes such as CA and mortality are strongly influenced by changes in patient case‐mix over time, the frequency of care limitation/DNR orders, or by poor triage decisions.[37] Outcomes such as unanticipated ICU admission are indirect and may be heavily influenced by local cultural factors. Finally, authors need to provide robust statistical data and clear numerators and denominators to support their conclusions.

Although we need to do our best to improve the quality of the RRS literature, the near ubiquitous presence of this patient‐safety intervention in North American hospitals raises a crucial question, Do we even need more effectiveness studies and if so what kind? Randomized controlled trials are not likely. It is hard to argue that we still sit at a position of equipoise, and randomizing patients who are deteriorating to standard care versus an RRS is neither practical nor ethical. Finding appropriate concurrent control hospitals that have not implemented some type of RRS would also be very difficult.

We should, however, continue to test the effectiveness of RRSs but in a more diverse manner. RRSs should be more directly compared to other interventions that can improve the problem of failure to rescue such as increased nurse staffing[38, 39, 40] and hospitalist staffing.[41] The low sensitivity and accuracy of monitoring vital signs on general wards by staff is also an area strongly deserving of investigation, as it is likely central to the problem. Researchers have sought to use various combinations of vital signs, including aggregated or weighted scoring systems, and recent data suggest some approaches may be superior to others.[42] Many have advocated for continuous monitoring of a limited set of vital signs similar to the ICU, and there are some recent data indicating that this might be effective.[43, 44] This work is in the early stages, and we do not yet know whether this strategy will affect outcomes. It is conceivable that if the false alarm rate can be kept very low and we can minimize the failure to recognize deteriorating patients (good sensitivity, specificity, and positive predictive value), the need for the RRS response team may be reduced or even eliminated. Additionally, as electronic medical records (EMRs) have expanded, there has been growing interest in leveraging these systems to improve the effectiveness of RRSs.[45] There is a tremendous amount of information within the EMRs that can be used to complement vital‐sign monitoring (manual or continuous), because baseline medical problems, laboratory values, and recent history may have a strong impact on the predictive value of changes in vital signs.

Research should also focus on the possible unintended consequences, costs, and the cost‐effectiveness of RRSs compared with other interventions that can or may reduce the rate of failure to rescue. Certainly, establishing RRSs has costs including staff time and the need to pull staff from other clinical duties to respond. Unintended harm, such as diversion of ICU staff from their usual care, are often mentioned but never rigorously evaluated. Increasing nurse staffing has very substantial costs, but how these costs compare to the costs of the RRS are unclear, although likely the comparison would be very favorable to the RRS, because staffing typically relies on existing employees with expertise in caring for the critically ill as opposed to workforce expansion. Given the current healthcare economic climate, any model that relies on additional employees is not likely to gain support. Establishing continuous monitoring systems have up‐front capital costs, although they may reduce other costs in the long run (eg, staff, medical liability). They also have intangible costs for provider workload if the false alarm rates are too high. Again, this strategy is too new to know the answers to these concerns. As we move forward, such evaluations are needed to guide policy decisions.

We also need more evaluation of RRS implementation science. The optimal way to organize, train, and staff RRSs is unknown. Most programs use physician‐led teams, although some use nurse‐led teams. Few studies have compared the various models, although 1 study that compared a resident‐led to an attending‐led team found no difference.[17] Education is ubiquitous, although actual staff training (simulation for example) is not commonly described. In addition, there is wide variation in the frequency of RRS activation. We know nurses and residents often feel pressured not to activate RRSs, and much of the success of the RRS relies on nurses identifying deteriorating patients and calling the response team. The use of continuous monitoring combined with automatic notification of staff may reduce the barriers to activating RRSs, increasing activation rates, but until then we need more understanding of how to break down these barriers. Family/patient access to activation has also gained ground (1 program demonstrated outcome improvement only after this was established[13]), but is not yet widespread.

The role of the RRS in improving processes of care, such as the appropriate institution of DNR orders, end of life/palliative care discussions, and early goal‐directed therapy for sepsis, have been presented in several studies[46, 47] but remain inadequately evaluated. Here too, there is much to learn about how we might realize the full effectiveness of this patient‐safety strategy beyond outcomes such as CA and hospital mortality. Ideally, if all appropriate patients had DNR orders and we stopped failing to recognize and respond to deteriorating ward patients, CAs on general hospital wards could be nearly eliminated.

RRSs have been described as a band‐aid for a failed model of general ward care.[37] What is clear is that many patients suffer preventable harm from unrecognized deterioration. This needs to be challenged, but are RRSs the best intervention? Despite the Joint Commission's Patient Safety Goal 16, should we still question their implementation? Should we (and the Joint Commission) reconsider our approach and prioritize our efforts elsewhere or should we feel comfortable with the investment that we have made in these systems? Even though there are many unknowns, and the quality of RRS studies needs improvement, the literature is accumulating that RRSs do reduce non‐ICU CA and improve hospital mortality. Without direct comparison studies demonstrating superiority of other expensive strategies, there is little reason to reconsider the RRS concept or question their implementation and our investment. We should instead invest further in this foundational patient‐safety strategy to make it as effective as it can be.

Disclosures: Dr. Pronovost reports the following potential conflicts of interest: grant or contract support from the Agency for Healthcare Research and Quality, and the Gordon and Betty Moore Foundation (research related to patient safety and quality of care), and the National Institutes of Health (acute lung injury research); consulting fees from the Association of Professionals in Infection Control and Epidemiology, Inc.; honoraria from various hospitals, health systems, and the Leigh Bureau to speak on quality and patient safety; book royalties from the Penguin Group; and board membership for the Cantel Medical Group. Dr. Winters reports the following potential conflicts of interest: contract or grant support from Masimo Corporation, honoraria from 3M Corporation and various hospitals and health systems, royalties from Lippincott Williams &Wilkins (UptoDate), and consulting fees from several legal firms for medical legal consulting.

In 2006,[1] we questioned whether rapid response systems (RRSs) were an effective strategy for detecting and managing deteriorating general ward patients. Since then, the implementation of RRSs has flourished, especially in the United States where accreditors (Joint Commission)[2] and patient‐safety organizations (Institute for Healthcare Improvement 100,000 Live Campaign)[3] have strongly supported RRSs. Decades of evidence show that general ward patients often experience unrecognized deterioration and cardiorespiratory arrest (CA). The low sensitivity and accuracy of periodic assessments by staff are thought to be a major reason for these lapses, as are imbalances between patient needs and clinician (primarily nursing) resources. Additionally, a medical culture that punishes speaking up or bypassing the chain of command are also likely contributors to the problem. A system that effectively recognizes the early signs of deterioration and quickly responds should catch problems before they become life threatening. Over the last decade, RRSs have been the primary intervention implemented to do this. The potential for RRSs to improve outcomes has strong face validity, but researchers have struggled to demonstrate consistent improvements in outcomes across institutions. Given this, are RRSs the best intervention to prevent this failure to rescue? In this editorial we examine the progress of RRSs, how they compare to other options, and we consider whether we should continue to question their implementation.

In our 2007 systematic review,[4] we concluded there was weak to moderate evidence supporting RRSs. Since then, 6 other systematic reviews of the effectiveness or implementation of RRSs have been published. One high‐quality review of effectiveness studies published through 2008 by Chan et al.[5] found that RRSs significantly reduced non‐intensive care unit (ICU) CA (relative risk [RR], 0.66; 95% confidence interval [CI], 0.54‐0.80), but not total hospital mortality (RR, 0.96; 95% CI, 0.84‐1.09) in adult inpatients. In pediatric inpatients, RRSs led to significant improvements in both non‐ICU CA (RR, 0.62; 95% CI, 0.46 to 0.84) and total hospital mortality (RR, 0.79; 95% CI, 0.63 to 0.98). Subsequent to 2008, a structured search[6] finds 26 additional studies.[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] Although the benefit for CA in both adults and children has remained robust, even more so since Chan's review, mortality reductions in adult patients appear to have had the most notable shift. In aggregate, the point estimate (for those studies providing analyzable data), for adult mortality has strengthened to 0.88, with a confidence interval of 0.82‐0.96 in favor of the RRS strategy.

This change has occurred as the analyzable studies since 2008 have all had favorable point estimates, and 4 have had statistically significant confidence intervals. Prior to 2008, 5 had unfavorable point estimates, and only 2 had favorable confidence intervals. As RRSs expand, the benefits, although not universal (some hospitals still experience no improvement in outcomes), seem to be getting stronger and more consistent. This may be secondary to maturation of the intervention and implementation strategies, or it may be the result of secular trends outside of the RRS intervention, although studies controlling for this found it not to be the case.[10] The factors associated with successful implementation of the RRS or improved outcomes include knowledge of activation criteria, communication, teamwork, lack of criticism for activating the RRS, and better attitudes about the team's positive effect on nurses and patients. Many of these factors relate to an improved safety culture in general. Additionally, activation rates may have increased in more recent studies, as greater utilization is associated with improved outcomes.[31] Finally, RRSs, like other patient‐safety and quality interventions, mature with time, often taking several years before they have a full effect on outcomes.[31, 32]

Despite these more favorable results for RRSs, we still see a large discrepancy between the magnitude of benefit for CA and mortality. This may partly be because the exposure groups are different; most studies examined non‐ICU CA, yet studies reporting mortality used total hospital mortality (ICU and non‐ICU). Additionally, although RRSs may effectively prevent CA, this intervention may have a more limited effect in preventing the patient's ultimate demise (particularly in the ICU).

We also still see that effectiveness reports for RRSs continue to be of low to moderate quality. Many reports give no statistics or denominator data or have missing data. Few control for secular trends in providers, outcomes, and confounders. Outcome measures vary widely, and none conducted blinded outcome assessments. Most studies use a pre‐post design without concurrent controls, substantially increasing the risk of bias. The better‐designed studies that use concurrent controls or cluster randomization (Priestley,[33] Bristow,[34] and the MERIT trial[35]) tend to show lower treatment effects, although interestingly in the MERIT trial, while the cluster‐randomized data showed no benefit, the pre‐post data showed significant improvement in the RRS intervention hospitals. These results have been attributed to the control hospitals using their code teams for RRS activities,[36] negating a comparative improvement in the intervention hospitals.

Can we improve RRS research? Likely, yes. We can begin by being more careful about defining the exposure group. Ideally, studies should not include data from the ICU or the emergency department because these patient populations are not part of the exposure group. Although most studies removed ICU and emergency department data for CA, they did not do so for hospital mortality. ICU mortality is likely biased, because only a small proportion of ICU patients have been exposed to an RRS. Definitions also need to be stringent and uniform. For example, CA may be defined in a variety of ways such as calling the code team versus documented cardiopulmonary resuscitation. Unexpected hospital mortality is often defined as excluding patients with do not resuscitate (DNR) orders, but this may or may not accurately exclude expected deaths. We also need to better attempt to control for confounders and secular trends. Outcomes such as CA and mortality are strongly influenced by changes in patient case‐mix over time, the frequency of care limitation/DNR orders, or by poor triage decisions.[37] Outcomes such as unanticipated ICU admission are indirect and may be heavily influenced by local cultural factors. Finally, authors need to provide robust statistical data and clear numerators and denominators to support their conclusions.

Although we need to do our best to improve the quality of the RRS literature, the near-ubiquitous presence of this patient-safety intervention in North American hospitals raises a crucial question: do we even need more effectiveness studies, and if so, what kind? Randomized controlled trials are unlikely. It is hard to argue that we still sit at a position of equipoise, and randomizing deteriorating patients to standard care versus an RRS is neither practical nor ethical. Finding appropriate concurrent control hospitals that have not implemented some type of RRS would also be very difficult.

We should, however, continue to test the effectiveness of RRSs, but in a more diverse manner. RRSs should be compared directly with other interventions that address the problem of failure to rescue, such as increased nurse staffing[38, 39, 40] and hospitalist staffing.[41] The low sensitivity and accuracy of vital-sign monitoring by staff on general wards also strongly deserves investigation, as it is likely central to the problem. Researchers have sought to use various combinations of vital signs, including aggregated or weighted scoring systems, and recent data suggest some approaches may be superior to others.[42] Many have advocated for continuous monitoring of a limited set of vital signs, similar to the ICU, and there are some recent data indicating that this might be effective.[43, 44] This work is in the early stages, and we do not yet know whether this strategy will affect outcomes. It is conceivable that if the false alarm rate can be kept very low and we can minimize the failure to recognize deteriorating patients (good sensitivity, specificity, and positive predictive value), the need for the RRS response team may be reduced or even eliminated. Additionally, as electronic medical records (EMRs) have expanded, there has been growing interest in leveraging these systems to improve the effectiveness of RRSs.[45] There is a tremendous amount of information within the EMR that can be used to complement vital-sign monitoring (manual or continuous), because baseline medical problems, laboratory values, and recent history may have a strong impact on the predictive value of changes in vital signs.
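The reason the false alarm rate dominates this question is the low prevalence of true deterioration among monitored patient-days. The minimal sketch below illustrates the arithmetic; the prevalence, sensitivity, and specificity values are hypothetical assumptions chosen for illustration and are not drawn from the cited studies.

```python
# Illustrative sketch: how low prevalence of true deterioration erodes the
# positive predictive value (PPV) of ward monitoring alarms. All input
# values are hypothetical assumptions, not data from the cited studies.

def alarm_profile(prevalence, sensitivity, specificity, patient_days=1000):
    """Return (PPV, true alarms, false alarms) per `patient_days` of monitoring."""
    true_events = prevalence * patient_days
    non_events = (1 - prevalence) * patient_days
    true_alarms = sensitivity * true_events
    false_alarms = (1 - specificity) * non_events
    ppv = true_alarms / (true_alarms + false_alarms)
    return ppv, true_alarms, false_alarms

# Assume 5 true deteriorations per 1000 monitored patient-days and a monitor
# with 95% sensitivity and 95% specificity.
ppv, tp, fp = alarm_profile(prevalence=0.005, sensitivity=0.95, specificity=0.95)
print(f"PPV = {ppv:.2f}; true alarms = {tp:.1f}, false alarms = {fp:.1f} per 1000 patient-days")
# Even at 95%/95%, PPV is below 10%: roughly 50 false alarms accompany ~5 true events,
# which is why keeping the false alarm rate very low is central to this strategy.
```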

Research should also focus on the possible unintended consequences, costs, and cost-effectiveness of RRSs compared with other interventions that can or may reduce the rate of failure to rescue. Certainly, establishing an RRS has costs, including staff time and the need to pull staff from other clinical duties to respond. Unintended harms, such as diversion of ICU staff from their usual care, are often mentioned but never rigorously evaluated. Increasing nurse staffing has very substantial costs, but how these costs compare with the costs of an RRS is unclear; the comparison would likely be very favorable to the RRS, because RRS staffing typically draws on existing employees with expertise in caring for the critically ill rather than on workforce expansion. Given the current healthcare economic climate, any model that relies on additional employees is not likely to gain support. Establishing continuous monitoring systems has up-front capital costs, although such systems may reduce other costs in the long run (eg, staff, medical liability). They also impose intangible costs on provider workload if false alarm rates are too high. Again, this strategy is too new for these questions to be answered. As we move forward, such evaluations are needed to guide policy decisions.

We also need more evaluation of RRS implementation science. The optimal way to organize, train, and staff RRSs is unknown. Most programs use physician-led teams, although some use nurse-led teams. Few studies have compared the various models, although 1 study comparing a resident-led team with an attending-led team found no difference.[17] Education is ubiquitous, although actual staff training (simulation, for example) is not commonly described. In addition, there is wide variation in the frequency of RRS activation. We know nurses and residents often feel pressured not to activate RRSs, and much of the success of the RRS relies on nurses identifying deteriorating patients and calling the response team. Continuous monitoring combined with automatic notification of staff may reduce the barriers to activating RRSs and increase activation rates, but until then we need a better understanding of how to break down these barriers. Family/patient access to activation has also gained ground (1 program demonstrated outcome improvement only after this was established[13]), but it is not yet widespread.

The role of the RRS in improving processes of care, such as the appropriate institution of DNR orders, end-of-life/palliative care discussions, and early goal-directed therapy for sepsis, has been described in several studies[46, 47] but remains inadequately evaluated. Here too, there is much to learn about how we might realize the full effectiveness of this patient-safety strategy beyond outcomes such as CA and hospital mortality. Ideally, if all appropriate patients had DNR orders and we stopped failing to recognize and respond to deteriorating ward patients, CAs on general hospital wards could be nearly eliminated.

RRSs have been described as a band-aid for a failed model of general ward care.[37] What is clear is that many patients suffer preventable harm from unrecognized deterioration. That status quo needs to be challenged, but are RRSs the best intervention? Despite the Joint Commission's Patient Safety Goal 16, should we still question their implementation? Should we (and the Joint Commission) reconsider our approach and prioritize our efforts elsewhere, or should we feel comfortable with the investment we have made in these systems? Even though there are many unknowns, and the quality of RRS studies needs improvement, the accumulating literature indicates that RRSs do reduce non-ICU CA and improve hospital mortality. Without direct comparison studies demonstrating the superiority of other expensive strategies, there is little reason to reconsider the RRS concept or to question their implementation and our investment. We should instead invest further in this foundational patient-safety strategy to make it as effective as it can be.

Disclosures: Dr. Pronovost reports the following potential conflicts of interest: grant or contract support from the Agency for Healthcare Research and Quality and the Gordon and Betty Moore Foundation (research related to patient safety and quality of care) and the National Institutes of Health (acute lung injury research); consulting fees from the Association of Professionals in Infection Control and Epidemiology, Inc.; honoraria from various hospitals, health systems, and the Leigh Bureau to speak on quality and patient safety; book royalties from the Penguin Group; and board membership for the Cantel Medical Group. Dr. Winters reports the following potential conflicts of interest: contract or grant support from Masimo Corporation; honoraria from 3M Corporation and various hospitals and health systems; royalties from Lippincott Williams & Wilkins (UpToDate); and consulting fees from several legal firms for medical legal consulting.

References
  1. Winters BD, Pham J, Pronovost PJ. Rapid response teams: walk, don't run. JAMA. 2006;296:1645-1647.
  2. Joint Commission requirement: The Joint Commission announces the 2008 National Patient Safety Goals and Requirements. Jt Comm Perspect. 2007;27(7):122.
  3. Institute for Healthcare Improvement. 5 million lives campaign: overview. Available at: http://www.ihi.org/offerings/Initiatives/PastStrategicInitiatives/5MillionLivesCampaign/Pages/default.aspx. Accessed November 28, 2012.
  4. Winters BD, Pham JC, Hunt EA, Guallar E, Berenholtz S, Pronovost PJ. Rapid response systems: a systematic review. Crit Care Med. 2007;35:1238-1243.
  5. Chan PS, Jain R, Nallmothu BK, Berg RA, Sasson C. Rapid response teams: a systematic review and meta-analysis. Arch Intern Med. 2010;170:18-26.
  6. Winters BD, Weaver SJ, Pfoh ER, Yang T, Pham JC, Dy SM. Rapid-response systems as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158:417-425.
  7. Chan PS, Khalid A, Longmore LS, Berg RA, Kosiborod M, Spertus JA. Hospital-wide code rates and mortality before and after implementation of a rapid response team. JAMA. 2008;300:2506-2513.
  8. Anwar ul Haque H, Saleem AF, Zaidi S, Haider SR. Experience of pediatric rapid response team in a tertiary care hospital in Pakistan. Indian J Pediatr. 2010;77:273-276.
  9. Bader MK, Neal B, Johnson L, et al. Rescue me: saving the vulnerable non-ICU patient population. Jt Comm J Qual Patient Saf. 2009;35:199-205.
  10. Beitler JR, Link N, Bails DB, Hurdle K, Chong DH. Reduction in hospital-wide mortality after implementation of a rapid response team: a long-term cohort study. Crit Care. 2011;15:R269.
  11. Benson L, Mitchell C, Link M, Carlson G, Fisher J. Using an advanced practice nursing model for a rapid response team. Jt Comm J Qual Patient Saf. 2008;34:743-747.
  12. Campello G, Granja C, Carvalho F, Dias C, Azevedo LF, Costa-Pereira A. Immediate and long-term impact of medical emergency teams on cardiac arrest prevalence and mortality: a plea for periodic basic life-support training programs. Crit Care Med. 2009;37:3054-3061.
  13. Gerdik C, Vallish RO, Miles K, Godwin SA, Wludyka PS, Panni MK. Successful implementation of a family and patient activated rapid response team in an adult level 1 trauma center. Resuscitation. 2010;81:1676-1681.
  14. Hanson CC, Randolph GD, Erickson JA, et al. A reduction in cardiac arrests and duration of clinical instability after implementation of a paediatric rapid response system. Qual Saf Health Care. 2009;18:500-504.
  15. Hatler C, Mast D, Bedker D, et al. Implementing a rapid response team to decrease emergencies outside the ICU: one hospital's experience. Medsurg Nurs. 2009;18:84-90, 126.
  16. Howell MD, Ngo L, Folcarelli P, et al. Sustained effectiveness of a primary-team-based rapid response system. Crit Care Med. 2012;40:2562-2568.
  17. Karvellas CJ, Souza IA, Gibney RT, Bagshaw SM. Association between implementation of an intensivist-led medical emergency team and mortality. BMJ Qual Saf. 2012;21:152-159.
  18. Konrad D, Jaderling G, Bell M, Granath F, Ekbom A, Martling CR. Reducing in-hospital cardiac arrests and hospital mortality by introducing a medical emergency team. Intensive Care Med. 2010;36:100-106.
  19. Kotsakis A, Lobos AT, Parshuram C, et al. Implementation of a multicenter rapid response system in pediatric academic hospitals is effective. Pediatrics. 2011;128:72-78.
  20. Laurens N, Dwyer T. The impact of medical emergency teams on ICU admission rates, cardiopulmonary arrests and mortality in a regional hospital. Resuscitation. 2011;82:707-712.
  21. Lighthall GK, Parast LM, Rapoport L, Wagner TH. Introduction of a rapid response system at a United States veterans affairs hospital reduced cardiac arrests. Anesth Analg. 2010;111:679-686.
  22. Medina-Rivera B, Campos-Santiago Z, Palacios AT, Rodriguez-Cintron W. The effect of the medical emergency team on unexpected cardiac arrest and death at the VA Caribbean healthcare system: a retrospective study. Crit Care Shock. 2010;13:98-105.
  23. Rothberg MB, Belforti R, Fitzgerald J, Friderici J, Keyes M. Four years' experience with a hospitalist-led medical emergency team: an interrupted time series. J Hosp Med. 2012;7:98-103.
  24. Santamaria J, Tobin A, Holmes J. Changing cardiac arrest and hospital mortality rates through a medical emergency team takes time and constant review. Crit Care Med. 2010;38:445-450.
  25. Sarani B, Palilonis E, Sonnad S, et al. Clinical emergencies and outcomes in patients admitted to a surgical versus medical service. Resuscitation. 2011;82:415-418.
  26. Scherr K, Wilson DM, Wagner J, Haughian M. Evaluating a new rapid response team: NP-led versus intensivist-led comparisons. AACN Adv Crit Care. 2012;23:32-42.
  27. Scott SS, Elliott S. Implementation of a rapid response team: a success story. Crit Care Nurse. 2009;29:66-75.
  28. Shah SK, Cardenas VJ, Kuo YF, Sharma G. Rapid response team in an academic institution: does it make a difference? Chest. 2011;139:1361-1367.
  29. Tibballs J, Kinney S. Reduction of hospital mortality and of preventable cardiac arrest and death on introduction of a pediatric medical emergency team. Pediatr Crit Care Med. 2009;10:306-312.
  30. Tobin AE, Santamaria JD. Medical emergency teams are associated with reduced mortality across a major metropolitan health network after two years service: a retrospective study using government administrative data. Crit Care. 2012;16:R210.
  31. Jones D, Bellomo R, Bates S, Warrillow S, et al. Long term effect of a medical emergency team on cardiac arrests in a teaching hospital. Crit Care. 2005;9:R808-R815.
  32. Buist M, Harrison J, Abaloz E, Dyke S. Six year audit of cardiac arrests and medical emergency team calls in an Australian outer metropolitan teaching hospital. BMJ. 2007;335:1210-1212.
  33. Priestley G, Watson W, Rashidian R, et al. Introducing Critical Care Outreach: a ward-randomised trial of phased introduction in a general hospital. Intensive Care Med. 2004;30:1398-1404.
  34. Bristow PJ, Hillman KM, Chey T, et al. Rates of in-hospital arrests, deaths, and intensive care admissions: the effect of a medical emergency team. Med J Aust. 2000;173:236-240.
  35. Hillman K, Chen J, Cretikos M, et al. Introduction of the medical emergency team (MET) system: a cluster randomised controlled trial. Lancet. 2005;365:2091-2097.
  36. Cretikos MA, Chen J, Hillman KM, Bellomo R, Finfer SR, Flabouris A. The effectiveness of implementation of the medical emergency team (MET) system and factors associated with use during the MERIT study. Crit Care Resusc. 2007;9:206-212.
  37. Litvak E, Pronovost PJ. Rethinking rapid response teams. JAMA. 2010;304:1375-1376.
  38. Wiltse Nicely KL, Sloane DM, Aiken LH. Lower mortality for abdominal aortic aneurysm repair in high-volume hospitals is contingent upon nurse staffing [published online ahead of print October 22, 2012]. Health Serv Res. doi: 10.1111/1475-6773.12004.
  39. Needleman J, Buerhaus P, Mattke S, Stewart M, Zelevinsky K. Nurse-staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346:1715-1722.
  40. Kane RL. The association of registered nurse staffing levels and patient outcomes: systematic review and meta-analysis. Med Care. 2007;45:1195-1204.
  41. Lindenauer PK, Rothberg MB, Pekow PS, Kenwood C, Benjamin EM, Auerbach AD. Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med. 2007;357:2589-2600.
  42. Smith GB, Prytherch DR, Meredith P, Schmidt PE, Featherstone PI. The ability of the National Early Warning Score (NEWS) to discriminate patients at risk of early cardiac arrest, unanticipated intensive care unit admission, and death. Resuscitation. 2013;84:465-470.
  43. Taenzer AH, Pyke JB, McGrath SP, Blike GT. Impact of pulse oximetry surveillance on rescue events and intensive care unit transfers: a before-and-after concurrence study. Anesthesiology. 2010;112:282-287.
  44. Bellomo R, Ackerman M, Bailey M, et al. A controlled trial of electronic automated advisory vital signs monitoring in general hospital wards. Crit Care Med. 2012;40:2349-2361.
  45. Agency for Healthcare Research and Quality. Early warning scoring system proactively identifies patients at risk of deterioration, leading to fewer cardiopulmonary emergencies and deaths. Available at: http://www.innovations.ahrq.gov/content.aspx?id=2607. Accessed March 26, 2013.
  46. Sebat F, Musthafa AA, Johnson D, et al. Effect of a rapid response system for patients in shock on time to treatment and mortality during 5 years. Crit Care Med. 2007;35:2568-2575.
  47. Jones DA, McIntyre T, Baldwin I, Mercer I, Kattula A, Bellomo R. The medical emergency team and end-of-life care: a pilot study. Crit Care Resusc. 2007;9:151-156.
Issue
Journal of Hospital Medicine - 8(5)
Page Number
278-281
Display Headline
Rapid response systems: Should we still question their implementation?
Article Source
© 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Bradford D. Winters, MD, Johns Hopkins University School of Medicine, Department of Anesthesiology and Critical Care Medicine, and Armstrong Institute for Patient Safety and Quality, Zayed 9127, 1800 Orleans St., Baltimore, MD 21287; Telephone: 410-955-9081; Fax: 410-955-9062; E-mail: [email protected]

ONLINE EXCLUSIVE: From Hospitalists, for Hospitalists: Top 10 Reasons To Come To HM13

Article Type
Changed
Fri, 09/14/2018 - 12:19
Display Headline
ONLINE EXCLUSIVE: From Hospitalists, for Hospitalists: Top 10 Reasons To Come To HM13

In March, Noah J. Finkel, MD, FHM, of Lahey Health System-Lahey Hospital emailed his hospitalist colleagues and encouraged them to join him at HM13 next month. For hospitalists still undecided about attending the largest conference specifically for hospitalists—especially academic hospitalists and those interested in health IT—SHM offers Dr. Finkel’s “Top 10” reasons to register for the annual meeting, which kicks off May 16 at the Gaylord National Resort and Conference Center in National Harbor, Md.

  1. HM13 offers 22.5 CME credits (sometimes better just to spend a few days cramming it in).
  2. Hospitalists on the Hill (hospitalmedicine2013.org/advocacy.php): great opportunity to meet with members of Congress and discuss issues important to HM (because you really don’t understand the SGR for Medicare reimbursement).
  3. Network with other hospitalists from across the country (avoid “local” medical thinking).
  4. Academic medicine track courses to enhance your teaching and research expertise (please admit that you were probably never formally trained).
  5. Comanagement pre-course and track to help with Medicine consult and orthopedic comanagement (is it a good time to start a beta-blocker?).
  6. Updates in the evidence-based medicine track to make sure that you know about the latest research before the medical students do (never good to be revealed as practicing “old medicine”).
  7. Hospitalist career track lecture to make sure you are climbing the ladder (do you have a career plan at all?).
  8. CPOE guidelines for inpatient medical care: perfect role for a hospitalist.
  9. Bob Wachter’s keynote on quality, safety, and IT. He’s the father of HM—’nuff said.
  10. ZDoggMD, the funniest hospitalist there is!

Dr. Finkel is medical director of information technology, hospital medicine; assistant professor, Tufts University School of Medicine, Lahey Health System-Lahey Hospital, Boston

Issue
The Hospitalist - 2013(04)

Chlorhexidine-impregnated washcloths lower risk of hospital-acquired infections

Article Type
Changed
Fri, 09/14/2018 - 12:19
Display Headline
Chlorhexidine-impregnated washcloths lower risk of hospital-acquired infections

Clinical question

In patients at high risk, does daily bathing with chlorhexidine-impregnated washcloths reduce the risk of hospital-acquired bloodstream infections?

Bottom line

For hospitalized patients at high risk of nosocomial infections, daily bathing with chlorhexidine-impregnated washcloths reduces the rate of methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococcus (VRE) acquisition, but it was not shown to reduce the rate of bloodstream infections from these organisms. The rate of hospital-acquired bloodstream infections overall was significantly reduced; this included infections from other organisms such as coagulase-negative staphylococci (CoNS) and fungi. LOE = 1b

Reference

Climo MW, Yokoe DS, Warren DK, et al. Effect of daily chlorhexidine bathing on hospital-acquired infection. N Engl J Med 2013;368(6):533-542.

Study design

Randomized controlled trial (nonblinded)

Funding source

Industry + government

Allocation

Uncertain

Setting

Inpatient (any location)

Synopsis

In this multicenter trial, investigators enrolled patients in 8 intensive care units and 1 bone marrow transplantation unit. Each unit was randomized to bathe patients daily with either nonantimicrobial washcloths or 2% chlorhexidine-impregnated washcloths for 6 months, followed by the alternate product for the next 6 months. Bathing was performed by using the washcloths on all body surfaces sequentially, excluding the face, per the manufacturer's instructions. Active surveillance testing for MRSA and VRE was performed on all units during the study period. Analysis was by intention to treat. Use of chlorhexidine washcloths lowered the risk of MRSA or VRE acquisition by 23% (5.10 vs 6.60 cases per 1000 patient-days; P = .03). The rate of hospital-acquired bloodstream infections also decreased by 28% with the use of these washcloths (4.78 vs 6.60 cases per 1000 patient-days; P = .007). Specifically, both primary bloodstream infections and central catheter-associated bloodstream infections occurred less frequently with the intervention (31% decrease in primary infections, P = .006; 53% decrease in catheter-related infections, P = .004). Thirty percent of the 221 bloodstream infections detected during the intervention and control periods were due to staphylococci, either Staphylococcus aureus or CoNS. Use of the chlorhexidine washcloths decreased CoNS bloodstream infections by 56% (0.60 vs 1.36 cases per 1000 patient-days; P = .008) and fungal central catheter-associated infections by 90% (0.07 vs 0.77 cases per 1000 catheter-days; P < .001). There were no serious adverse events associated with the chlorhexidine washcloths. The MRSA and VRE isolates that were acquired did not show increased resistance to chlorhexidine, although this does not allay the concern regarding longer-term emergence of high-level resistance.
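The relative reductions quoted above follow directly from the reported event rates. The short sketch below (written in Python purely for illustration) reproduces that arithmetic using the rates given in the synopsis.

```python
# Recompute the relative reductions quoted in the synopsis from the reported
# event rates (cases per 1000 patient-days). Relative reduction is
# (control rate - intervention rate) / control rate.

def relative_reduction(control_rate, intervention_rate):
    return (control_rate - intervention_rate) / control_rate

outcomes = {
    "MRSA/VRE acquisition": (6.60, 5.10),
    "Hospital-acquired bloodstream infection": (6.60, 4.78),
    "CoNS bloodstream infection": (1.36, 0.60),
}
for name, (control, chlorhexidine) in outcomes.items():
    print(f"{name}: {relative_reduction(control, chlorhexidine):.0%} lower with chlorhexidine")
# Prints approximately 23%, 28%, and 56%, matching the figures in the text.
```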

 

Issue
The Hospitalist - 2013(04)

10 days enoxaparin better than 35 days rivaroxaban for medical inpatient thromboprophylaxis (MAGELLAN)

Article Type
Changed
Fri, 09/14/2018 - 12:19
Display Headline
10 days enoxaparin better than 35 days rivaroxaban for medical inpatient thromboprophylaxis (MAGELLAN)

Clinical question

Is rivaroxaban for 35 days better than enoxaparin for 10 days to prevent venous thromboembolism in medical inpatients?

Bottom line

Enoxaparin for 10 days provides similar protection to rivaroxaban for 35 days against symptomatic venous thromboembolism (VTE) or VTE-related death, and the extended use of rivaroxaban leads to an increase in clinically relevant and major bleeding. Rivaroxaban cannot be recommended for this indication. The larger question is whether we should be routinely anticoagulating these patients at all. Although the 2012 guidelines from the American College of Chest Physicians (http://guideline.gov/content.aspx?id=35263) recommend prophylaxis for inpatients at increased risk for VTE, a recent American College of Physicians' guideline calls this practice into question, noting that for every 4 pulmonary emboli prevented, you cause 9 major bleeding events (Ann Intern Med 2011;155:602). LOE = 1b

Reference

Cohen AT, Spiro TE, Büller HR, et al, for the MAGELLAN Investigators. Rivaroxaban for thromboprophylaxis in acutely ill medical patients. N Engl J Med 2013;368(6):513-523.

Study design

Randomized controlled trial (double-blinded)

Funding source

Industry

Allocation

Concealed

Setting

Inpatient (any location)

Synopsis

Patients admitted within 72 hours of an acute medical illness who were older than 40 years and who had reduced mobility were randomized to receive either enoxaparin 40 mg once daily for 10 days plus 35 days of oral placebo or rivaroxaban 10 mg once daily for 35 days plus 10 days of subcutaneous placebo. There were a total of 8101 patients in the study (average age = 71 years). Patients were hospitalized for infectious disease (45%), heart failure (32%), respiratory insufficiency (27%), stroke (17%), or active cancer (7%), and at least 1 day of immobilization was anticipated, with decreased mobility for at least 4 days. This study was performed in 52 countries, many of which must keep their patients in the hospital longer than we do in the United States, as the median duration of hospitalization was a whopping 11 days. There was an extensive list of inclusion criteria, with patients having at least one risk factor for VTE and not having any obvious bleeding risks. Groups were balanced at the start of the study, analysis was by intention to treat, and outcomes were blindly adjudicated. Patients underwent ultrasound to detect asymptomatic deep vein thrombosis (DVT) at 10 days and 35 days, and underwent imaging to detect VTE if they were symptomatic at any time. The composite efficacy outcome was a combination of asymptomatic proximal DVT, symptomatic DVT or pulmonary embolism, and VTE-related death at 10 and 35 days, and the safety outcome was major or fatal bleeding at 10 and 35 days. Only approximately 75% of the patients were included in the efficacy outcome, because approximately one fourth in each group failed to have the follow-up ultrasound to detect asymptomatic DVT. Although the authors point to the superiority of rivaroxaban at 35 days (4.4% vs 5.7%; P = .02; number needed to treat [NNT] = 77), this is only because of a decrease in asymptomatic DVTs; that is, DVTs we would never have known about were it not for the mandated study ultrasound. There was no significant difference in the likelihood of symptomatic VTE or VTE-related death. Major bleeding was more common in the rivaroxaban group at 35 days (4.1% vs 1.7%; P < .001; number needed to treat to harm = 42). This includes 7 fatal bleeds in the rivaroxaban group and only 1 in the enoxaparin group. The authors do not perform statistical testing for fatal bleeds (or for a number of other outcomes that appear unfavorable to rivaroxaban), but I did and found that it was statistically significant (two-tailed chi-square = 4.7; P = .03). All-cause mortality was similar between groups. The "net benefit" was the composite of the primary efficacy and primary safety outcomes and favors enoxaparin (7.8% vs 9.4%; P = .02; NNT = 62). Rather than stating the obvious (enoxaparin was superior to rivaroxaban for the net benefit outcome), the authors spin this by stating that "the prespecified analysis of net clinical benefit or harm did not show a benefit with rivaroxaban at either day 10 or 35."
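The number-needed-to-treat (NNT) and number-needed-to-harm figures quoted above are simply the reciprocals of the absolute risk differences. The sketch below (Python, for illustration only) reproduces that arithmetic from the percentages reported in the synopsis.

```python
# Reproduce the NNT/NNH figures in the synopsis: each is 1 / (absolute risk difference).

def number_needed(rate_a, rate_b):
    """Patients treated per additional event prevented (or caused)."""
    return 1 / abs(rate_a - rate_b)

nnt_vte = number_needed(0.057, 0.044)    # composite VTE at day 35: 5.7% vs 4.4% -> ~77
nnh_bleed = number_needed(0.041, 0.017)  # major bleeding at day 35: 4.1% vs 1.7% -> ~42
nnt_net = number_needed(0.094, 0.078)    # net-benefit composite: 9.4% vs 7.8% -> ~62

print(f"NNT for the composite VTE outcome: ~{nnt_vte:.0f}")
print(f"Number needed to harm for major bleeding: ~{nnh_bleed:.0f}")
print(f"NNT for the net-benefit composite (favoring enoxaparin): ~{nnt_net:.0f}")
```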

 

 

 

Issue
The Hospitalist - 2013(04)

ONLINE EXCLUSIVE: Study: Tracheostomy Collar Facilitates Quicker Transition

Article Type
Changed
Fri, 09/14/2018 - 12:19
Display Headline
ONLINE EXCLUSIVE: Study: Tracheostomy Collar Facilitates Quicker Transition

Each day a patient spends on a ventilator increases pneumonia risk by about 1% (Am J Respir Crit Care Med. 2002;165[7]:867-903). Being unable to move or talk also might induce a sense of helplessness. As a result, many clinicians try to wean patients off the ventilator sooner rather than later.

A recent study (JAMA. 2013;309[7]:671-677) has found that unassisted breathing via a tracheostomy collar facilitates a quicker transition than breathing with pressure support after prolonged mechanical ventilation (>21 days). Investigators reported their findings at the Society of Critical Care Medicine’s 42nd Congress in January in San Juan, Puerto Rico.

On average, patients were able to successfully wean four days earlier with unassisted breathing versus pressure support—a significant difference, says lead investigator Amal Jubran, MD, section chief of pulmonary and critical-care medicine at the Edward Hines Jr. VA Hospital in Chicago. No major differences were reported in survival between the two groups at six-month and 12-month intervals after enrollment in the study.

“The faster pace of weaning in the tracheostomy collar group may be related to its effect on clinical decision-making,” says Dr. Jubran, a professor at Loyola University Chicago’s Stritch School of Medicine. “Observing a patient breathing through a tracheostomy collar provides the clinician with a clear view of the patient’s respiratory capabilities.”

In contrast, with pressure support, a clinician’s perception of weanability “is clouded because the patient is receiving ventilator assistance,” she says. “It is extremely difficult to distinguish between how much work the patient is doing and how much work the ventilator is doing.”

Amid this uncertainty, Dr. Jubran adds, clinicians are more likely to accelerate the weaning process in patients who unexpectedly respond well during a tracheostomy collar challenge than in those receiving a low level of pressure support.

In the study, less than 10% of 312 patients—most of whom were elderly—required reconnection to a ventilator after being weaned successfully. Weaning efforts should be restarted only after cardiopulmonary stability has been reached, she says.

Factoring into the equation are the measurements for blood pressure and respiratory rate and the amounts of oxygenation and sedation in patients on ventilators, says Paul Odenbach, MD, SHM, a hospitalist at Abbott Northwestern Hospital in Minneapolis.

“I look at them clinically overall,” he says. “The most important piece is eyeballing them from where they are in their disease trajectory. Are they awake enough to be protecting their airway once they are extubated?” he adds. He has found that a stable airway is more easily achieved with a tracheostomy collar.

Managing heart failure, treating infections, and optimizing nutrition are crucial before weaning off ventilation, says geriatrician Joel Sender, MD, section chief of pulmonary medicine at St. Barnabas Hospital in Bronx, N.Y., and medical director of its Rehabilitation & Continuing Care Center.

“It is important to identify the best candidates for weaning and then apply the best methods,” says Dr. Sender. “Sadly, many patients are not good candidates, and only a portion are successfully weaned.” That’s why “there’s a great need to have a frank discussion with the family to answer their questions and to promote a realistic set of treatment goals.” TH

Susan Kreimer is a freelance writer in New York.

Key Takeaways for Hospitalists

  • The biggest obstacle in weaning management is the delay in starting to assess whether a patient is ready for weaning.
  • Weaning off mechanical ventilation should be attempted as soon as cardiopulmonary instability has been resolved.
  • Patients requiring prolonged mechanical ventilation should be weaned with daily trials of unassisted breathing through a tracheostomy collar and not with pressure support.

Doubt and the importance of waiting

Article Type
Changed
Thu, 12/06/2018 - 10:48
Display Headline
Doubt and the importance of waiting

There was an article in the New York Times recently about a software program developed by scientists at Harvard University and the Massachusetts Institute of Technology that grades essays instantly. How strange it must be to have a computer judge all the nuances of language involved in a work of prose. Multiple-choice questions alone can already be contentious, as anyone can attest who has waited months for the results of a 240-question board exam.

It gets even more hairy in real life. What I would give for the ability to examine a patient, submit my findings to some omniscient entity (Watson, maybe?), and get instant feedback.

Unfortunately, this is not how rheumatology works. The temporal artery biopsy does not always come back positive, and when it doesn’t, it becomes much harder to know what to do when a patient relapses once he is down to 20 mg of prednisone. And how do I justify putting a refractory dermatomyositis patient on a toxic immunosuppressant when her muscle biopsy is negative and all I have to base my decision on is a highly suggestive skin rash?

Very little of what we do as rheumatologists is evidence based. Doubt is a recurring theme in my practice. I am lucky that I practice with other physicians with whom I can bounce ideas around and that I maintain a relationship with previous mentors who are always ready with good advice. I am fortunate to be close enough to Boston that many of my scleroderma patients can get enrolled in trials. But doubt is ever-present and is very frequently the cause of not insignificant personal malaise.

Here’s what’s even more striking about the essay-grading software, and even more pertinent to our nebulous field: they’ve eliminated waiting. According to the Times article, a student can submit an essay, receive feedback right away, and resubmit the essay in an attempt to get a better grade. That is an enviable state of affairs.

There is, however, something to be said for waiting. Patience is a defining virtue in rheumatology. We wait for a patient with vague symptoms to develop more symptoms or to "declare" themselves. We wait for labs and pathology reports and imaging findings. Perhaps most importantly, we wait for either a response to treatment or for the other shoe to drop.

By promising instant feedback to students, are we not depriving them of an important life lesson? After all, we spend our lifetimes waiting. We wait to grow up, to get our driver’s licenses, to hear about school or job applications. We eagerly wait for holidays, for meals to be served, and for spring to arrive. We wait to grow older. We wait to recover from illness.

As Father James Donelan, S.J., put it, "Waiting is a mystery – a natural sacrament of life." Waiting teaches us to have patience and self-control, to innovate and be imaginative. Waiting feeds our curiosity and cultivates a sense of wonder.

There is something to be said for learning to be comfortable with disquiet. Through doubt and waiting we learn to ask questions. And in a field like ours, where our scope of knowledge is so tiny compared with what we have yet to learn, questioning is not a bad word. Vladimir Nabokov, who was also an entomologist apart from being a writer, said it best: "There is no science without fancy and no art without fact."

Dr. Chan practices rheumatology in Pawtucket, R.I. This column, "Rheum in Bloom," appears regularly in Rheumatology News.


Manage most SEGAs with rapamycin analogs, not surgery

Article Type
Changed
Tue, 02/14/2023 - 13:10
Display Headline
Manage most SEGAs with rapamycin analogs, not surgery

SAN DIEGO – Medical management with sirolimus or everolimus for pediatric patients with tuberous sclerosis complex and subependymal giant cell astrocytomas is more effective and safer than surgery, researchers from the University of Cincinnati and University of California, Los Angeles, have found.

Although the benign tumors have traditionally been left to surgeons, it’s become clear in recent years that rapamycin analogs are effective, too. The question has been "which [approach] is best?" Medical management "is known to be pretty mild compared to the surgery," but it’s not curative, explained lead investigator Susanne Yoon, the University of Cincinnati medical student who presented the results at the annual meeting of the American Academy of Neurology.

The team compared outcomes for 23 SEGA (subependymal giant cell astrocytoma) patients who underwent surgery, 81 who took sirolimus or everolimus, and 9 who got both. The surgery patients were diagnosed when they were about 10 years old and were followed for a median of 8.9 years; the medical patients were about 7 years old when diagnosed, and were followed for a median of 2.8 years. Boys made up the majority of both groups.

None of the children who took a rapamycin analog needed surgery; tumors shrank by more than half in 61% (45). The drugs caused infections, weight change, or hyperlipidemia in some, but only 13% (11) needed to stop the drug or go to the hospital because of side effects.

Meanwhile, surgery cured just 39% (9) of the children who got it, sometimes after two or three operations; 61% (14) of those patients had prolonged hospitalizations or were hospitalized due to postoperative complications that included intracranial hemorrhage in 8, hydrocephalus/shunt malfunction in 6, neurologic impairment, and seizures.

"Not only does medical management win in efficacy, but it also wins in the safety issues. Rapalog [rapamycin] therapy, alone or in combination, is becoming a cornerstone of tumor management" in neurocutaneous disorders, said Dr. David H. Viskochil, professor of pediatrics at the University of Utah, Salt Lake City, commenting on the study.

"Of course, there are emergent situations where you’ve just got to go in and get the tumor out; you can’t wait 3 months to see" if drugs work. "But if a child is just starting to show some symptoms and not deteriorating, then you can start with medicine first and see what happens," he said.

"The question is if you got [SEGAs] really early, would surgical cure be much more likely? The studies aren’t quite there yet," he said in an interview.

Ms. Yoon and Dr. Viskochil said they have no disclosures.


Vitals

Major finding: Rapamycin analogs shrink SEGA tumors by more than 50% in a majority of children, and obviate the need for surgery.

Data source: Comparison of surgical and medical treatment of SEGA tumors in 113 children.

Disclosures: Ms. Yoon and Dr. Viskochil said they have no disclosures.

Resident Use of Handoff Information

Article Type
Changed
Sun, 05/21/2017 - 18:14
Display Headline
Answering questions on call: Pediatric resident physicians' use of handoffs and other resources

Hospital communication failures are a leading cause of serious errors and adverse events in the United States.[1, 2, 3, 4] With the implementation of duty‐hour restrictions for resident physicians,[5] there has been particular focus on the transfer of information during handoffs at change of shift.[6, 7] Many residency programs have sought to improve the processes of written and verbal handoffs through various initiatives, including: (1) automated linkage of handoff forms to electronic medical records (EMRs)[8, 9, 10]; (2) introduction of oral communication curricula, handoff simulation, or mnemonics[11, 12, 13]; and (3) faculty oversight of housestaff handoffs.[14, 15] Underlying each initiative has been the assumption that improving written and verbal handoff processes will ensure the availability of optimal patient information for on‐call housestaff. There has been little investigation, however, into what clinical questions are actually being asked of on‐call trainees, as well as what sources of information they are using to provide answers.

The aim of our study was to examine the extent to which written and verbal handoffs are utilized by pediatric trainees to derive answers to questions posed during overnight shifts. We also sought to describe both the frequency and types of on‐call questions being asked of trainees. Our primary outcome was trainee use of written handoffs to answer on‐call questions. Secondary outcomes included trainee use of verbal handoffs, as well as their use of alternative information resources to answer on‐call questions, including other clinical staff (ie, attending physicians, senior residents, nursing staff), patients and their families, the medical record, or the Internet. We then examined a variety of trainee, patient, and question characteristics to assess potential predictors of written and verbal handoff use.

METHODS

Institutional approval was granted to prospectively observe pediatric interns at the start of their overnight on‐call shifts on 2 inpatient wards at Boston Children's Hospital during 3 winter months (November through January). Our study was conducted during the postintervention period of a larger study that was designed to examine the effectiveness of a new resident handoff bundle on resident workflow and patient safety.[13] Interns rotating on study ward 1 used a structured, nonautomated tool (Microsoft Word version 2003; Microsoft Corp., Redmond, WA). Interns on study ward 2 used a handoff tool that was developed at the study hospital for use with the hospital's EMR, Cerner PowerChart version 2007.17 (Cerner Corp., Kansas City, MO). Interns on both wards received training on specific communication strategies, including verbal and written handoff processes.[13]

For our study, we recorded all questions being asked of on‐call interns by patients, parents, or other family members, as well as nurses or other clinical providers after completion of their evening handoff. We then directly observed all information resources used to derive answers to any questions asked pertaining to patients discussed in the evening handoff. We excluded any questions about new patient admissions or transfers, as well as nonpatient‐related questions.

Both study wards were staffed by separate day and night housestaff teams, who worked shifts of 12 to 14 hours in duration and had similar nursing schedules. The day team consisted of 3 interns and 1 senior resident per ward. The night team consisted of 1 intern on each ward, supervised by a senior resident covering both wards. Each day intern rotated for 1 week (Sunday through Thursday) during their month‐long ward rotation as part of the night team. We considered any intern on either of the 2 study wards to be eligible for enrollment in this study. Written consent was obtained from all participants.

The night intern received a verbal and written handoff at the shift change (usually performed between 5 and 7pm) from 1 of the departing day interns prior to the start of the observation period. This handoff was conducted face‐to‐face in a ward conference room typically with the on‐call night intern and supervising resident receiving the handoff together from the departing day intern/senior resident.

Observation Protocol

Data collection was conducted by an independent, board‐certified, pediatric physician observer on alternating weeknights immediately after the day‐to‐night evening handoff had taken place. A strict observation protocol was followed. When an eligible question was asked of the participating intern, the physician observer would record the question and the time. The question source, defined as a nurse, parent/patient, or other clinical staff (eg, pharmacist, consultant) was documented, as well as the mode of questioning, defined as face to face, text page, or phone call.

The observer would then note if and when the question was answered. Once the question was answered, the observer would ask the intern if he or she had used the written handoff to provide the answer (yes or no). Our primary outcome was reported use of the written handoff. In addition, the observer directly noted if the intern looked at the written handoff tool at any time when answering a question. The intern was also asked to name any and all additional information resources used, including verbal handoff, senior resident, nursing staff, other clinicians, a patient/parent or other family member, a patient's physical exam, the EMR, the Internet, or his or her own medical or clinical knowledge.

All question and answer information was tracked using a handheld digital timing device. In addition, the following data were recorded for each patient involved in a recorded question: the patient's admitting service, transfer status, and length of stay.

Data Categorization and Analysis

Content of recorded questions were categorized according to whether they involved: (1) medications (including drug allergies or levels), (2) diet or fluids, (3) laboratory values or diagnostic testing/procedures, (4) physical exam findings (eg, a distended abdomen, blood pressure, height/weight), or (5) general care‐plan questions. We also categorized time used for generating an answer as immediate (<5 minutes), delayed (>5 minutes but <1.5 hours), or deferred (any question unanswered during the time of observation).
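
As a small illustration of these timing categories, a hypothetical helper such as the one below could bin each question's time-to-answer; the function name and thresholds simply restate the definitions above and are not part of the study's actual analysis code.

```python
def categorize_answer_time(minutes_to_answer):
    """Bin time-to-answer using the study's definitions: immediate (<5 min),
    delayed (>5 min but <1.5 h), deferred (not answered during observation).
    None means the question was never answered while the observer was present;
    answers taking 90 minutes or more are treated as deferred here as a
    simplifying assumption."""
    if minutes_to_answer is None:
        return "deferred"
    if minutes_to_answer < 5:
        return "immediate"
    if minutes_to_answer < 90:
        return "delayed"
    return "deferred"

print(categorize_answer_time(2))     # immediate
print(categorize_answer_time(12))    # delayed
print(categorize_answer_time(None))  # deferred
```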

All data were entered into a database using SPSS 16.0 Data Builder software (SPSS Inc., Chicago, IL), and statistical analyses were performed with PASW 18 (SPSS Inc.) and SAS 9.2 (SAS Institute Inc., Cary, NC) software. Observed questions were summarized according to content categories. We also described trainee and patient characteristics relevant to the questions being studied. To study risk factors for written handoff use, the outcome was dichotomized as reported use of the written handoff by the intern as a resource to answer the question asked versus no reported use of the written handoff as a resource to answer the question asked. We did not include observed use of the written handoff in these statistical analyses. To account for patient‐ or provider‐induced correlations among observed questions, we used a generalized estimating equations (GEE) approach (PROC GENMOD in SAS 9.2) to fit logistic regression models for written handoff use and permitted a nested correlation structure among the questions (ie, questions from the same patient were allowed to be correlated, and patients under the care of the same intern could have intern‐induced correlation). Univariate regression modeling was used to evaluate the effects of question, patient, and intern characteristics. Multivariate logistic regression models were used to identify independent risk factors for written handoff use. Any variable with a P value ≤0.1 in the univariate regression model was considered as a candidate variable for the multivariate regression model. We then used a backward elimination approach to obtain the final model, which included only variables that remained significant at the P<0.05 level. Our analysis of verbal handoff use was carried out in a similar fashion.
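
For readers less familiar with this class of model, the sketch below shows what an analogous GEE logistic regression might look like in Python with statsmodels rather than SAS PROC GENMOD. The data file and column names are hypothetical, and the working correlation is simplified to an exchangeable structure clustered on intern rather than the nested patient-within-intern structure used in the study.

```python
# Illustrative sketch only: a GEE logistic regression analogous to the one
# described above, using statsmodels in place of SAS PROC GENMOD.
# The file and column names are hypothetical; clustering is simplified to
# intern-level exchangeable correlation rather than a nested structure.
import numpy as np
import pandas as pd
import statsmodels.api as sm
import statsmodels.formula.api as smf

# One row per observed question (hypothetical extract of the study database)
df = pd.read_csv("oncall_questions.csv")

model = smf.gee(
    "used_written_handoff ~ C(question_category) + consecutive_call_night "
    "+ C(ward) + los_over_2_days",
    groups="intern_id",                       # questions clustered within intern
    data=df,
    family=sm.families.Binomial(),            # dichotomous outcome: used vs not
    cov_struct=sm.cov_struct.Exchangeable(),  # working within-cluster correlation
)
result = model.fit()
print(result.summary())

# Express coefficients as odds ratios with 95% confidence intervals
print(np.exp(result.params))
print(np.exp(result.conf_int()))
```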

RESULTS

Twenty‐eight observation nights (equivalent to 77 hours and 6 minutes of total direct observation time), consisting of 13 sessions on study ward 1 and 15 sessions on study ward 2, were completed. A total of 15 first‐year pediatric interns (5 male, 33%; 10 female, 66.7%), with a median age of 27.5 years (interquartile range [IQR]: 26-29 years), participated. Interns on the 2 study wards were comparable with regard to trainee week of service (P=0.43) and consecutive night of call at the time of observation (P=0.45). Each intern was observed for a mean of 2 sessions (range, 1-3 sessions), with a mean observation time per session of approximately 2 hours and 45 minutes (±23 minutes).

Questions

A total of 260 questions (ward 1: 136 questions, ward 2: 124 questions) met inclusion criteria and involved 101 different patients, with a median of 2 questions/patient (IQR: 1-3) and a range of 1 to 14 questions/patient. Overall, interns were asked 2.6 questions/hour (IQR: 1.4-4.7), with a range of 0 to 7 questions per hour; the great majority of questions (210 [82%]) were posed face to face. Types of questions recorded included medications 28% (73), diet/fluids 15% (39), laboratory or diagnostic/procedural related 22% (57), physical exam or other measurements 8.5% (22), and other general medical or patient care‐plan questions 26.5% (69) (Table 1). Examples of recorded questions are provided in Table 2.

Table 1. Patient, Question, and Answer Characteristics (values are No. [%])

NOTE: Abbreviations: CCS, complex care service. *Patients' inpatient length of stay means time (in days) between admission date and night of recorded question. Interns' week of service and consecutive night means time (in weeks or days, respectively) between interns' ward rotation start date and night of observation. Clinical provider means nursing staff, referring pediatrician, pharmacist, or other clinical provider. Other resources includes general medical/clinical knowledge, the electronic medical record, parents' report, other clinicians' report (ie, senior resident, nursing staff), Internet.

Patients, n=101
  Admitting services
    General pediatrics: 49 (48)
    Pediatric subspecialty: 27 (27)
    CCS: 25 (25)
  Patients transferred from critical care unit
    Yes: 21 (21)
    No: 80 (79)
Questions, n=260
  Patients' length of stay at time of recorded question*
    ≤2 days: 142 (55)
    >2 days: 118 (45)
  Intern consecutive night shift (1-5)
    1st or 2nd night (early): 86 (33)
    3rd through 5th night (late): 174 (67)
  Intern week of service during a 4‐week rotation
    Weeks 1-2 (early): 119 (46)
    Weeks 3-4 (late): 141 (54)
  Question sources
    Clinical provider: 167 (64)
    Parent/patient or other family member: 93 (36)
  Question categories
    Medications: 73 (28)
    Diet and/or fluids: 39 (15)
    Labs or diagnostic imaging/procedures: 57 (22)
    Physical exam/vital signs/measurements: 22 (8.5)
    Other general medical or patient care plan questions: 69 (26.5)
Answers, n=233
  Resources reported
    Written sign‐out: 17 (7.3)
    Verbal sign‐out (excluding any written sign‐out use): 59 (25.3)
    Other resources: 157 (67.4)
Table 2. Question Examples by Category

NOTE: Abbreviations: AM, morning; NG, nasogastric; NPO, nothing by mouth.

Medication questions (including medication allergy or drug level questions)
  • Could you clarify the lasix orders?
  • Pharmacy rejected the medication, what do you want to do?
Dietary and fluid questions
  • Do you want to continue NG feeds at 10 mL/hr and advance?
  • Is she going to need to be NPO for the biopsy in the AM?
Laboratory or diagnostic tests/procedure questions
  • Do you want blood cultures on this patient?
  • What was the result of her x‐ray?
Physical exam questions (including height/weight or vital sign measurements)
  • What do you think of my back (site of biopsy)?
  • Is my back okay, because it seems sore after the (renal) biopsy?
Other (patient related) general medical or care plan questions
  • Did you talk with urology about their recommendations?
  • Do you know the plan for tomorrow?

Across the 2 study wards, 48% (49) of patients involved in questions were admitted to a general pediatric service; 27% (27) were admitted to a pediatric specialty service (including the genetics/metabolism, endocrinology, adolescent medicine, pulmonary, or toxicology admitting services); the remaining 25% (25) were admitted to a complex care service (CCS), specifically designed for patients with multisystem genetic, neurological, or congenital disorders (Table 1).[16, 17] Approximately 21% (21) of patients had been transferred to the floor from a critical care unit (Table 1).

Answers

Of the 260 recorded questions, 90% (233) had documented answers. For the 10% (27) of questions with undocumented answers, 21 were observed to be verbally deferred by the intern to the day team or another care provider (ie, other physician or nurse), and almost half (42.9% [9]) involved general care‐plan questions; the remainder involved medication (4), diet (2), diagnostic testing (5), or vital sign (1) questions. An additional 6 questions went unanswered during the observation period, and it is unknown if or when they were answered.

Of the answered questions, 90% (209) of answers were provided by trainees within 5 minutes and 9% (21) within 1.5 hours. In all, interns reported using 1 information resource to provide answers for 61% (142) of questions, 2 resources for 33% (76) of questions, and 3 resources for 6% (15) of questions.

Across both study wards, interns reported using information provided in written or verbal handoffs to answer 32.6% of questions. Interns reported using the written handoff, either alone or in combination with other information resources, to provide answers for 7.3% (17) of questions; verbal handoff, either alone or in combination with another resource (excluding written handoff), was reported as a resource for 25.3% (59) of questions. Of note, interns were directly observed to look at the written handoff when answering 21% (49) of questions.

A variety of other resources, including general medical/clinical knowledge, the EMR, and parents or other resources, were used to answer the remaining 67.4% (157) of questions. Intern general medical knowledge (ie, reports of simply knowing the answer to the question in their head[s]) was used to provide answers for 53.2% (124) of questions asked.

Unadjusted univariate regression analyses assessing predictors of written and verbal handoff use are shown in Figure 1. Multivariate logistic regression analyses showed that both dietary questions (odds ratio [OR]: 3.64, 95% confidence interval [CI]: 1.51-8.76; P=0.004) and interns' consecutive call night (OR: 0.29, 95% CI: 0.09-0.93; P=0.04) remained significant predictors of written handoff use. After adjusting for risk factors identified above, no differences in written handoff use were seen between the 2 wards.

Figure 1
Univariate predictors of written and verbal handoff use. Physical exam/measurement questions are not displayed in this graph as they were not associated with written or verbal handoff use. Abbreviations: CI, confidence interval; ICU, intensive care unit. *P < 0.05 = significant univariate predictor of written handoff use. **P < 0.05 = significant univariate predictor of verbal handoff use.

Multivariate logistic regression for predictors of verbal handoff use showed that questions regarding patients with longer lengths of stay (OR: 1.97, 95% CI: 1.02-3.8; P=0.04), those regarding general care plans (OR: 2.07, 95% CI: 1.13-3.78; P=0.02), as well as those asked by clinical staff (OR: 1.95, 95% CI: 1.04-3.66; P=0.04), remained significant predictors of reported verbal handoff use.

DISCUSSION

In light of the recent changes in duty hours implemented in July 2011, many pediatric training programs are having trainees work in day and night shifts.[18] Pediatric resident physicians frequently answer questions that pertain to patients handed off between day and night shifts. We found that on average, information provided in the verbal and written handoff was used almost once per hour. Housestaff in our study generally based their answers on information found in 1 or 2 resources, with almost one‐third of all questions involving some use of the written or verbal handoff. Prior research has documented widespread problems with resident handoff practices across programs and a high rate of medical errors due to miscommunications.[3, 4, 19, 20] Given how often information contained within the handoff was used as interns went about their nightly tasks, it is not difficult to understand how errors or omissions in the handoff process may potentially translate into frequent problems in direct patient care.

Trainees reported using written handoff tools to provide answers for 7.3% of questions. As we had suspected, they relied less frequently on their written handoffs as they completed more consecutive call nights. Interestingly, however, even when housestaff did not report using the written handoff, they were observed quite often to look at it before providing an answer. One explanation for this discrepancy between trainee reports and our observations is that the written handoff may serve as a memory tool, even if housestaff do not directly attribute their answers to its content. Our study also found that answers to questions concerning patients' diet and fluids were more likely to be ascribed to information contained in the written handoff. This finding supports the potential value of automated written handoff tools that are linked to the EMR, which can best ensure accuracy of this type of information.

Housestaff in our study also reported using information received during the verbal handoff to answer 1 out of every 4 on‐call questions. Although we did not specifically rate or monitor the quality of verbal handoffs, prior research has demonstrated that resident verbal handoff is often plagued with incomplete and inaccurate data.[3, 4, 19, 21] One investigation found that pediatric interns were prone to overestimating the effectiveness of their verbal handoffs, even as they failed to convey urgent information to their peers.[19] In light of such prior work, our finding that interns frequently rely on the verbal transfer of information supports specific residency training program handoff initiatives that target verbal exchanges.[11, 22, 23]

Although information obtained in the handoff was frequently required by on‐call housestaff, our study found that two‐thirds of all questions were answered using other resources, most often general medical or clinical knowledge. Clearly, background knowledge and experience is fundamental to trainees' ability to perform their jobs. Such reliance on general knowledge for problem solving may not be unique to interns. One recent observational study of senior pediatric cardiac subspecialists reported a high frequency of reliance on their own clinical experience, instinct, or prior training in making clinical decisions.[24] Further investigation may be useful to parse out the exact types of clinical knowledge being used, and may have important implications for how training programs plan for overnight supervision.[25, 26, 27]

Our study has several limitations. First, it was beyond the scope of this study to link housestaff answers to patient outcomes or medical errors. Given the frequency with which the handoff, a known source of vulnerability to medical error, was used by on‐call housestaff, our study suggests that future research evaluating the relationship between questions asked of on‐call housestaff, the answers provided, and downstream patient safety incidents may be merited. Second, our study was conducted in a single pediatric residency program with 1 physician observer midway through the first year of training and only in the early evening hours. This limits the generalizability of our findings, as the use of handoffs to answer on‐call questions may be different at other stages of the training process, within other specialties, or even at different times of the day. We also began our observations after the handoff had taken place; future studies may want to assess how variations in written and verbal handoff processes affect their use. As a final limitation, we note that although collecting information in real time using a direct observational method eliminated the problem of recall bias, there may have been attribution bias.

The results of our study demonstrate that on‐call pediatric housestaff are frequently asked a variety of clinical questions posed by hospital staff, patients, and their families. We found that trainees are apt to rely both on handoff information and other resources to provide answers. By better understanding what resources on‐call housestaff are accessing to answer questions overnight, we may be able to better target interventions needed to improve the availability of patient information, as well as the usefulness of written and verbal handoff tools.[11, 22, 23]

Acknowledgments

The authors thank Katharine Levinson, MD, and Melissa Atmadja, BA, for their help with the data review and guidance with database management. The authors also thank the housestaff from the Boston Combined Residency Program in Pediatrics for their participation in this study.

Disclosures: Maireade E. McSweeney, MD, as the responsible author certifies that all coauthors have seen and agree with the contents of this article, takes responsibility for the accuracy of these data, and certifies that this information is not under review by any other publication. All authors had no financial conflicts of interest or conflicts of interest relevant to this article to disclose. Dr. Landrigan is supported in part by the Children's Hospital Association for his work as an Executive Council member of the Pediatric Research in Inpatient Settings network. In addition, he has received honoraria from the Committee of Interns and Residents as well as multiple academic medical centers for lectures delivered on handoffs, sleep deprivation, and patient safety, and he has served as an expert witness in cases regarding patient safety and sleep deprivation.

References
  1. Improving America's hospitals: The Joint Commission's annual report on quality and safety. 2007. Available at: http://www.jointcommission.org/Improving_Americas_Hospitals_The_Joint_Commissions_Annual_Report_on_Quality_and_Safety_-_2007. Accessed October 3, 2011.
  2. US Department of Health and Human Services, Office of Inspector General. Adverse events in hospitals: methods for identifying events. 2010. Available at: http://oig.hhs.gov/oei/reports/oei-06-08-00221.pdf. Accessed October 3, 2011.
  3. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign-out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14:401-407.
  4. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign-out for patient care. Arch Intern Med. 2008;168:1755-1760.
  5. Accreditation Council for Graduate Medical Education. Common program requirements. 2010. Available at: http://acgme-2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed January 25, 2011.
  6. Volpp KG, Landrigan CP. Building physician work hour regulations from first principles and best evidence. JAMA. 2008;300:1197-1199.
  7. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign-out. J Hosp Med. 2006;1:257-266.
  8. Eaton EG, Horvath KD, Lober WB, Rossini AJ, Pellegrini CA. A randomized, controlled trial evaluating the impact of a computerized rounding and sign-out system on continuity of care and resident work hours. J Am Coll Surg. 2005;200:538-545.
  9. Wayne JD, Tyagi R, Reinhardt G, Rooney D, Makoul G, Chopra S, DaRosa D. Simple standardized patient handoff system that increases accuracy and completeness. J Surg Educ. 2008;65:476-485.
  10. Li P, Ali S, Tang C, Ghali WA, Stelfox HT. Review of computerized physician handoff tools for improving the quality of patient care [published online ahead of print November 20, 2012]. J Hosp Med. doi: 10.1002/jhm.1988.
  11. Sectish TC, Starmer AJ, Landrigan CP, Spector ND. Establishing a multisite education and research project requires leadership, expertise, collaboration, and an important aim. Pediatrics. 2010;126:619-622.
  12. Farnan JM, Paro JA, Rodriguez RM, et al. Hand-off education and evaluation: piloting the observed simulated hand-off experience (OSHE). J Gen Intern Med. 2009;25:129-134.
  13. Starmer AJ, Spector ND, Srivastava R, Allen AD, Landrigan CP, Sectish TC. I-PASS, a mnemonic to standardize verbal handoffs. Pediatrics. 2012;129:201-204.
  14. Chu ES, Reid M, Schulz T, et al. A structured handoff program for interns. Acad Med. 2009;84:347-352.
  15. Nabors C, Peterson SJ, Lee WN, et al. Experience with faculty supervision of an electronic resident sign-out system. Am J Med. 2010;123:376-381.
  16. Berry JG, Hall DE, Kuo DZ, et al. Hospital utilization and characteristics of patients experiencing recurrent readmissions within children's hospitals. JAMA. 2011;305:682-690.
  17. Simon TD, Berry J, Feudtner C, et al. Children with complex chronic conditions in inpatient hospital settings in the United States. Pediatrics. 2010;126:647-655.
  18. Chua KP, Gordon MB, Sectish T, Landrigan CP. Effects of a night-team system on resident sleep and work hours. Pediatrics. 2011;128:1142-1147.
  19. Chang VY, Arora VM, Lev-Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand-off communication. Pediatrics. 2010;125:491-496.
  20. McSweeney ME, Lightdale JR, Vinci RJ, Moses J. Patient handoffs: pediatric resident experiences and lessons learned. Clin Pediatr (Phila). 2011;50:57-63.
  21. Borowitz SM, Waggoner-Fountain LA, Bass EJ, Sledd RM. Adequacy of information transferred at resident sign-out (in-hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17:6-10.
  22. Arora V, Johnson J. A model for building a standardized hand-off protocol. Jt Comm J Qual Patient Saf. 2006;32:646-655.
  23. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign-out skills curriculum. J Gen Intern Med. 2007;22:1470-1474.
  24. Darst JR, Newburger JW, Resch S, Rathod RH, Lock JE. Deciding without data. Congenit Heart Dis. 2010;5:339-342.
  25. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87:428-442.
  26. Haber LA, Lau CY, Sharpe BA, Arora VM, Farnan JM, Ranji SR. Effects of increased overnight supervision on resident education, decision-making, and autonomy. J Hosp Med. 2012;7:606-610.
  27. Farnan JM, Burger A, Boonayasai RT, et al. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7:521-523.
Article PDF
Issue
Journal of Hospital Medicine - 8(6)
Page Number
328-333
Sections
Files
Files
Article PDF
Article PDF

Hospital communication failures are a leading cause of serious errors and adverse events in the United States.[1, 2, 3, 4] With the implementation of duty‐hour restrictions for resident physicians,[5] there has been particular focus on the transfer of information during handoffs at change of shift.[6, 7] Many residency programs have sought to improve the processes of written and verbal handoffs through various initiatives, including: (1) automated linkage of handoff forms to electronic medical records (EMRs)[8, 9, 10]; (2) introduction of oral communication curricula, handoff simulation, or mnemonics[11, 12, 13]; and (3) faculty oversight of housestaff handoffs.[14, 15] Underlying each initiative has been the assumption that improving written and verbal handoff processes will ensure the availability of optimal patient information for on‐call housestaff. There has been little investigation, however, into what clinical questions are actually being asked of on‐call trainees, as well as what sources of information they are using to provide answers.

The aim of our study was to examine the extent to which written and verbal handoffs are utilized by pediatric trainees to derive answers to questions posed during overnight shifts. We also sought to describe both the frequency and types of on‐call questions being asked of trainees. Our primary outcome was trainee use of written handoffs to answer on‐call questions. Secondary outcomes included trainee use of verbal handoffs, as well as their use of alternative information resources to answer on‐call questions, including other clinical staff (ie, attending physicians, senior residents, nursing staff), patients and their families, the medical record, or the Internet. We then examined a variety of trainee, patient, and question characteristics to assess potential predictors of written and verbal handoff use.

METHODS

Institutional approval was granted to prospectively observe pediatric interns at the start of their overnight on‐call shifts on 2 inpatient wards at Boston Children's Hospital during 3 winter months (November through January). Our study was conducted during the postintervention period of a larger study that was designed to examine the effectiveness of a new resident handoff bundle on resident workflow and patient safety.[13] Interns rotating on study ward 1 used a structured, nonautomated tool (Microsoft Word version 2003; Microsoft Corp., Redmond, WA). Interns on study ward 2 used a handoff tool that was developed at the study hospital for use with the hospital's EMR, Cerner PowerChart version 2007.17 (Cerner Corp., Kansas City, MO). Interns on both wards received training on specific communication strategies, including verbal and written handoff processes.[13]

For our study, we recorded all questions being asked of on‐call interns by patients, parents, or other family members, as well as nurses or other clinical providers after completion of their evening handoff. We then directly observed all information resources used to derive answers to any questions asked pertaining to patients discussed in the evening handoff. We excluded any questions about new patient admissions or transfers, as well as nonpatient‐related questions.

Both study wards were staffed by separate day and night housestaff teams, who worked shifts of 12 to 14 hours in duration and had similar nursing schedules. The day team consisted of 3 interns and 1 senior resident per ward. The night team consisted of 1 intern on each ward, supervised by a senior resident covering both wards. Each day intern rotated for 1 week (Sunday through Thursday) during their month‐long ward rotation as part of the night team. We considered any intern on either of the 2 study wards to be eligible for enrollment in this study. Written consent was obtained from all participants.

The night intern received a verbal and written handoff at the shift change (usually performed between 5 and 7 pm) from 1 of the departing day interns prior to the start of the observation period. This handoff was conducted face‐to‐face in a ward conference room, typically with the on‐call night intern and supervising resident receiving the handoff together from the departing day intern/senior resident.

Observation Protocol

Data collection was conducted by an independent, board‐certified pediatric physician observer on alternating weeknights, immediately after the day‐to‐night evening handoff had taken place. A strict observation protocol was followed. When an eligible question was asked of the participating intern, the physician observer would record the question and the time. The question source, defined as a nurse, parent/patient, or other clinical staff (eg, pharmacist, consultant), was documented, as well as the mode of questioning, defined as face to face, text page, or phone call.

The observer would then note if and when the question was answered. Once the question was answered, the observer would ask the intern if he or she had used the written handoff to provide the answer (yes or no). Our primary outcome was reported use of the written handoff. In addition, the observer directly noted if the intern looked at the written handoff tool at any time when answering a question. The intern was also asked to name any and all additional information resources used, including verbal handoff, senior resident, nursing staff, other clinicians, a patient/parent or other family member, a patient's physical exam, the EMR, the Internet, or his or her own medical or clinical knowledge.

All question and answer information was tracked using a handheld digital timing device. In addition, the following patient data were recorded for each patient involved in a recorded question: the patient's admitting service, transfer status, and length of stay.

Data Categorization and Analysis

Recorded questions were categorized by content according to whether they involved: (1) medications (including drug allergies or levels), (2) diet or fluids, (3) laboratory values or diagnostic testing/procedures, (4) physical exam findings (eg, a distended abdomen, blood pressure, height/weight), or (5) general care‐plan questions. We also categorized the time taken to generate an answer as immediate (<5 minutes), delayed (>5 minutes but <1.5 hours), or deferred (any question unanswered during the time of observation).

All data were entered into a database using SPSS 16.0 Data Builder software (SPSS Inc., Chicago, IL), and statistical analyses were performed with PASW 18 (SPSS Inc.) and SAS 9.2 (SAS Institute Inc., Cary, NC) software. Observed questions were summarized according to content categories. We also described trainee and patient characteristics relevant to the questions being studied. To study risk factors for written handoff use, the outcome was dichotomized according to whether or not the intern reported using the written handoff as a resource to answer the question asked. We did not include observed use of the written handoff in these statistical analyses. To account for patient‐ or provider‐induced correlations among observed questions, we used a generalized estimating equations (GEE) approach (PROC GENMOD in SAS 9.2) to fit logistic regression models for written handoff use and permitted a nested correlation structure among the questions (ie, questions from the same patient were allowed to be correlated, and patients under the care of the same intern could have intern‐induced correlation). Univariate regression modeling was used to evaluate the effects of question, patient, and intern characteristics. Multivariate logistic regression models were used to identify independent risk factors for written handoff use. Any variable with a P value ≤0.1 in the univariate regression model was considered a candidate variable for the multivariate regression model. We then used a backward elimination approach to obtain the final model, which retained only variables that remained significant at the P<0.05 level. Our analysis of verbal handoff use was carried out in a similar fashion.
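
To make the modeling approach above concrete, the fragment below sketches how such a nested GEE logistic model might be specified in SAS PROC GENMOD. It is an illustrative sketch only, assuming a question‐level analysis dataset; the dataset and variable names (night_questions, used_written, q_category, los_gt2days, late_consec_night, intern_id, patient_id) are hypothetical and are not taken from the study's actual analysis files.

proc genmod data=night_questions descending;
  /* Hypothetical question-level dataset: one record per observed question.   */
  /* intern_id and patient_id identify the covering intern and the patient;   */
  /* q_category, los_gt2days, and late_consec_night are example question-,    */
  /* patient-, and intern-level predictors.                                   */
  class intern_id patient_id q_category;
  /* Binary outcome: intern reported using the written handoff (1) versus not (0) */
  model used_written = q_category los_gt2days late_consec_night
        / dist=binomial link=logit;
  /* Nested GEE working correlation: questions cluster within patients,       */
  /* and patients cluster within the covering intern.                         */
  repeated subject=patient_id(intern_id) / type=exch;
run;

Because PROC GENMOD does not perform automated variable selection for GEE models, the backward elimination described above would typically be carried out by refitting successively reduced models by hand.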

RESULTS

Twenty‐eight observation nights (equivalent to 77 hours and 6 minutes of total direct observation time), consisting of 13 sessions on study ward 1 and 15 sessions on study ward 2, were completed. A total of 15 first‐year pediatric interns (5 male, 33%; 10 female, 66.7%), with a median age of 27.5 years (interquartile range [IQR]: 26–29 years), participated. Interns on the 2 study wards were comparable with regard to trainee week of service (P=0.43) and consecutive night of call at the time of observation (P=0.45). Each intern was observed for a mean of 2 sessions (range, 1–3 sessions), with a mean observation time per session of approximately 2 hours and 45 minutes (±23 minutes).

Questions

A total of 260 questions (ward 1: 136 questions, ward 2: 124 questions) met inclusion criteria and involved 101 different patients, with a median of 2 questions/patient (IQR: 1–3) and a range of 1 to 14 questions/patient. Overall, interns were asked 2.6 questions/hour (IQR: 1.4–4.7), with a range of 0 to 7 questions per hour; the great majority of questions (210 [82%]) were posed face to face. Types of questions recorded included medications 28% (73), diet/fluids 15% (39), laboratory or diagnostic/procedural related 22% (57), physical exam or other measurements 8.5% (22), and other general medical or patient care‐plan questions 26.5% (69) (Table 1). Examples of recorded questions are provided in Table 2.

Table 1. Patient, Question, and Answer Characteristics (values are No. [%])

Patients, n=101
Admitting services
  General pediatrics: 49 (48)
  Pediatric subspecialty: 27 (27)
  CCS*: 25 (25)
Patients transferred from critical care unit
  Yes: 21 (21)
  No: 80 (79)

Questions, n=260
Patients' length of stay at time of recorded question*
  ≤2 days: 142 (55)
  >2 days: 118 (45)
Intern consecutive night shift (1–5)
  1st or 2nd night (early): 86 (33)
  3rd through 5th night (late): 174 (67)
Intern week of service during a 4‐week rotation
  Weeks 1–2 (early): 119 (46)
  Weeks 3–4 (late): 141 (54)
Question sources
  Clinical provider: 167 (64)
  Parent/patient or other family member: 93 (36)
Question categories
  Medications: 73 (28)
  Diet and/or fluids: 39 (15)
  Labs or diagnostic imaging/procedures: 57 (22)
  Physical exam/vital signs/measurements: 22 (8.5)
  Other general medical or patient care plan questions: 69 (26.5)

Answers, n=233
Resources reported
  Written sign‐out: 17 (7.3)
  Verbal sign‐out (excluding any written sign‐out use): 59 (25.3)
  Other resources: 157 (67.4)

NOTE: Abbreviations: CCS, complex care service. *Patients' inpatient length of stay means time (in days) between admission date and night of recorded question. Interns' week of service and consecutive night means time (in weeks or days, respectively) between interns' ward rotation start date and night of observation. Clinical provider means nursing staff, referring pediatrician, pharmacist, or other clinical provider. Other resources includes general medical/clinical knowledge, the electronic medical record, parents' report, other clinicians' report (ie, senior resident, nursing staff), Internet.
Table 2. Question Examples by Category

Medication questions (including medication allergy or drug level questions)
  Could you clarify the lasix orders?
  Pharmacy rejected the medication, what do you want to do?
Dietary and fluid questions
  Do you want to continue NG feeds at 10 mL/hr and advance?
  Is she going to need to be NPO for the biopsy in the AM?
Laboratory or diagnostic tests/procedure questions
  Do you want blood cultures on this patient?
  What was the result of her x‐ray?
Physical exam questions (including height/weight or vital sign measurements)
  What do you think of my back (site of biopsy)?
  Is my back okay, because it seems sore after the (renal) biopsy?
Other (patient related) general medical or care plan questions
  Did you talk with urology about their recommendations?
  Do you know the plan for tomorrow?

NOTE: Abbreviations: AM, morning; NG, nasogastric; NPO, nothing by mouth.

Across the 2 study wards, 48% (49) of patients involved in questions were admitted to a general pediatric service; 27% (27) were admitted to a pediatric specialty service (including the genetics/metabolism, endocrinology, adolescent medicine, pulmonary, or toxicology admitting services); the remaining 25% (25) were admitted to a complex care service (CCS), specifically designed for patients with multisystem genetic, neurological, or congenital disorders (Table 1).[16, 17] Approximately 21% (21) of patients had been transferred to the floor from a critical care unit (Table 1).

Answers

Of the 260 recorded questions, 90% (233) had documented answers. For the 10% (27) of questions with undocumented answers, 21 were observed to be verbally deferred by the intern to the day team or another care provider (ie, other physician or nurse), and almost half (42.9% [9]) involved general care‐plan questions; the remainder involved medication (4), diet (2), diagnostic testing (5), or vital sign (1) questions. An additional 6 questions went unanswered during the observation period, and it is unknown if or when they were answered.

Of the answered questions, 90% (209) of answers were provided by trainees within 5 minutes and 9% (21) within 1.5 hours. In all, interns reported using a single information resource to provide answers for 61% (142) of questions, 2 resources for 33% (76) of questions, and 3 resources for 6% (15) of questions.

Across both study wards, interns reported using information provided in written or verbal handoffs to answer 32.6% of questions. Interns reported using the written handoff, either alone or in combination with other information resources, to provide answers for 7.3% (17) of questions; verbal handoff, either alone or in combination with another resource (excluding written handoff), was reported as a resource for 25.3% (59) of questions. Of note, interns were directly observed to look at the written handoff when answering 21% (49) of questions.

A variety of other resources, including general medical/clinical knowledge, the EMR, and parents or other resources, were used to answer the remaining 67.4% (157) of questions. Intern general medical knowledge (ie, reports of simply knowing the answer to the question in their head[s]) was used to provide answers for 53.2% (124) of questions asked.

Unadjusted univariate regression analyses assessing predictors of written and verbal handoff use are shown in Figure 1. Multivariate logistic regression analyses showed that both dietary questions (odds ratio [OR]: 3.64, 95% confidence interval [CI]: 1.51–8.76; P=0.004) and interns' consecutive call night (OR: 0.29, 95% CI: 0.09–0.93; P=0.04) remained significant predictors of written handoff use. After adjusting for risk factors identified above, no differences in written handoff use were seen between the 2 wards.

Figure 1. Univariate predictors of written and verbal handoff use. Physical exam/measurement questions are not displayed in this graph as they were not associated with written or verbal handoff use. Abbreviations: CI, confidence interval; ICU, intensive care unit. *P < 0.05 = significant univariate predictor of written handoff use. **P < 0.05 = significant univariate predictor of verbal handoff use.

Multivariate logistic regression for predictors of verbal handoff use showed that questions regarding patients with longer lengths of stay (OR: 1.97, 95% CI: 1.02–3.8; P=0.04), those regarding general care plans (OR: 2.07, 95% CI: 1.13–3.78; P=0.02), as well as those asked by clinical staff (OR: 1.95, 95% CI: 1.04–3.66; P=0.04), remained significant predictors of reported verbal handoff use.

DISCUSSION

In light of the recent changes in duty hours implemented in July 2011, many pediatric training programs are having trainees work in day and night shifts.[18] Pediatric resident physicians frequently answer questions that pertain to patients handed off between day and night shifts. We found that on average, information provided in the verbal and written handoff was used almost once per hour. Housestaff in our study generally based their answers on information found in 1 or 2 resources, with almost one‐third of all questions involving some use of the written or verbal handoff. Prior research has documented widespread problems with resident handoff practices across programs and a high rate of medical errors due to miscommunications.[3, 4, 19, 20] Given how often information contained within the handoff was used as interns went about their nightly tasks, it is not difficult to understand how errors or omissions in the handoff process may potentially translate into frequent problems in direct patient care.

Trainees reported using written handoff tools to provide answers for 7.3% of questions. As we had suspected, they relied less frequently on their written handoffs as they completed more consecutive call nights. Interestingly, however, even when housestaff did not report using the written handoff, they were observed quite often to look at it before providing an answer. One explanation for this discrepancy between trainee reports and our observations is that the written handoff may serve as a memory tool, even if housestaff do not directly attribute their answers to its content. Our study also found that answers to questions concerning patients' diet and fluids were more likely to be ascribed to information contained in the written handoff. This finding supports the potential value of automated written handoff tools that are linked to the EMR, which can best ensure accuracy of this type of information.

Housestaff in our study also reported using information received during the verbal handoff to answer 1 out of every 4 on‐call questions. Although we did not specifically rate or monitor the quality of verbal handoffs, prior research has demonstrated that resident verbal handoff is often plagued with incomplete and inaccurate data.[3, 4, 19, 21] One investigation found that pediatric interns were prone to overestimating the effectiveness of their verbal handoffs, even as they failed to convey urgent information to their peers.[19] In light of such prior work, our finding that interns frequently rely on the verbal transfer of information supports specific residency training program handoff initiatives that target verbal exchanges.[11, 22, 23]

Although information obtained in the handoff was frequently required by on‐call housestaff, our study found that two‐thirds of all questions were answered using other resources, most often general medical or clinical knowledge. Clearly, background knowledge and experience are fundamental to trainees' ability to perform their jobs. Such reliance on general knowledge for problem solving may not be unique to interns. One recent observational study of senior pediatric cardiac subspecialists reported a high frequency of reliance on their own clinical experience, instinct, or prior training in making clinical decisions.[24] Further investigation may be useful to parse out the exact types of clinical knowledge being used, and may have important implications for how training programs plan for overnight supervision.[25, 26, 27]

Our study has several limitations. First, it was beyond the scope of this study to link housestaff answers to patient outcomes or medical errors. Given the frequency with which the handoff, a known source of vulnerability to medical error, was used by on‐call housestaff, our study suggests that future research evaluating the relationship between questions asked of on‐call housestaff, the answers provided, and downstream patient safety incidents may be merited. Second, our study was conducted in a single pediatric residency program, with 1 physician observer, midway through the first year of training, and only in the early evening hours. This limits the generalizability of our findings, as the use of handoffs to answer on‐call questions may differ at other stages of the training process, within other specialties, or even at different times of the day. We also began our observations after the handoff had taken place; future studies may want to assess how variations in written and verbal handoff processes affect their use. As a final limitation, we note that although collecting information in real time using a direct observational method eliminated the problem of recall bias, there may have been attribution bias.

The results of our study demonstrate that on‐call pediatric housestaff are frequently asked a variety of clinical questions posed by hospital staff, patients, and their families. We found that trainees are apt to rely both on handoff information and other resources to provide answers. By better understanding what resources on‐call housestaff are accessing to answer questions overnight, we may be able to better target interventions needed to improve the availability of patient information, as well as the usefulness of written and verbal handoff tools.[11, 22, 23]

Acknowledgments

The authors thank Katharine Levinson, MD, and Melissa Atmadja, BA, for their help with the data review and guidance with database management. The authors also thank the housestaff from the Boston Combined Residency Program in Pediatrics for their participation in this study.

Disclosures: Maireade E. McSweeney, MD, as the responsible author certifies that all coauthors have seen and agree with the contents of this article, takes responsibility for the accuracy of these data, and certifies that this information is not under review by any other publication. All authors had no financial conflicts of interest or conflicts of interest relevant to this article to disclose. Dr. Landrigan is supported in part by the Children's Hospital Association for his work as an Executive Council member of the Pediatric Research in Inpatient Settings network. In addition, he has received honoraria from the Committee of Interns and Residents as well as multiple academic medical centers for lectures delivered on handoffs, sleep deprivation, and patient safety, and he has served as an expert witness in cases regarding patient safety and sleep deprivation.

References
  1. Improving America's hospitals: The Joint Commission's annual report on quality and safety. 2007. Available at: http://www.jointcommission.org/Improving_Americas_Hospitals_The_Joint_Commissions_Annual_Report_on_Quality_and_Safety_‐_2007. Accessed October 3, 2011.
  2. US Department of Health and Human Services, Office of Inspector General. Adverse events in hospitals: methods for identifying events. 2010. Available at: http://oig.hhs.gov/oei/reports/oei‐06‐08‐00221.pdf. Accessed October 3, 2011.
  3. Arora V, Johnson J, Lovinger D, Humphrey HJ, Meltzer DO. Communication failures in patient sign‐out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14:401–407.
  4. Horwitz LI, Moin T, Krumholz HM, Wang L, Bradley EH. Consequences of inadequate sign‐out for patient care. Arch Intern Med. 2008;168:1755–1760.
  5. Accreditation Council for Graduate Medical Education. Common program requirements. 2010. Available at: http://acgme‐2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed January 25, 2011.
  6. Volpp KG, Landrigan CP. Building physician work hour regulations from first principles and best evidence. JAMA. 2008;300:1197–1199.
  7. Vidyarthi AR, Arora V, Schnipper JL, Wall SD, Wachter RM. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out. J Hosp Med. 2006;1:257–266.
  8. Eaton EG, Horvath KD, Lober WB, Rossini AJ, Pellegrini CA. A randomized, controlled trial evaluating the impact of a computerized rounding and sign‐out system on continuity of care and resident work hours. J Am Coll Surg. 2005;200:538–545.
  9. Wayne JD, Tyagi R, Reinhardt G, Rooney D, Makoul G, Chopra S, DaRosa D. Simple standardized patient handoff system that increases accuracy and completeness. J Surg Educ. 2008;65:476–485.
  10. Li P, Ali S, Tang C, Ghali WA, Stelfox HT. Review of computerized physician handoff tools for improving the quality of patient care [published online ahead of print November 20, 2012]. J Hosp Med. doi: 10.1002/jhm.1988.
  11. Sectish TC, Starmer AJ, Landrigan CP, Spector ND. Establishing a multisite education and research project requires leadership, expertise, collaboration, and an important aim. Pediatrics. 2010;126:619–622.
  12. Farnan JM, Paro JA, Rodriguez RM, et al. Hand‐off education and evaluation: piloting the observed simulated hand‐off experience (OSHE). J Gen Intern Med. 2009;25:129–134.
  13. Starmer AJ, Spector ND, Srivastava R, Allen AD, Landrigan CP, Sectish TC. I‐PASS, a mnemonic to standardize verbal handoffs. Pediatrics. 2012;129:201–204.
  14. Chu ES, Reid M, Schulz T, et al. A structured handoff program for interns. Acad Med. 2009;84:347–352.
  15. Nabors C, Peterson SJ, Lee WN, et al. Experience with faculty supervision of an electronic resident sign‐out system. Am J Med. 2010;123:376–381.
  16. Berry JG, Hall DE, Kuo DZ, et al. Hospital utilization and characteristics of patients experiencing recurrent readmissions within children's hospitals. JAMA. 2011;305:682–690.
  17. Simon TD, Berry J, Feudtner C, et al. Children with complex chronic conditions in inpatient hospital settings in the United States. Pediatrics. 2010;126:647–655.
  18. Chua KP, Gordon MB, Sectish T, Landrigan CP. Effects of a night‐team system on resident sleep and work hours. Pediatrics. 2011;128:1142–1147.
  19. Chang VY, Arora VM, Lev‐Ari S, D'Arcy M, Keysar B. Interns overestimate the effectiveness of their hand‐off communication. Pediatrics. 2010;125:491–496.
  20. McSweeney ME, Lightdale JR, Vinci RJ, Moses J. Patient handoffs: pediatric resident experiences and lessons learned. Clin Pediatr (Phila). 2011;50:57–63.
  21. Borowitz SM, Waggoner‐Fountain LA, Bass EJ, Sledd RM. Adequacy of information transferred at resident sign‐out (in‐hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17:6–10.
  22. Arora V, Johnson J. A model for building a standardized hand‐off protocol. Jt Comm J Qual Patient Saf. 2006;32:646–655.
  23. Horwitz LI, Moin T, Green ML. Development and implementation of an oral sign‐out skills curriculum. J Gen Intern Med. 2007;22:1470–1474.
  24. Darst JR, Newburger JW, Resch S, Rathod RH, Lock JE. Deciding without data. Congenit Heart Dis. 2010;5:339–342.
  25. Farnan JM, Petty LA, Georgitis E, et al. A systematic review: the effect of clinical supervision on patient and residency education outcomes. Acad Med. 2012;87:428–442.
  26. Haber LA, Lau CY, Sharpe BA, Arora VM, Farnan JM, Ranji SR. Effects of increased overnight supervision on resident education, decision‐making, and autonomy. J Hosp Med. 2012;7:606–610.
  27. Farnan JM, Burger A, Boonyasai RT, et al. Survey of overnight academic hospitalist supervision of trainees. J Hosp Med. 2012;7:521–523.
Issue
Journal of Hospital Medicine - 8(6)
Page Number
328-333
Display Headline
Answering questions on call: Pediatric resident physicians' use of handoffs and other resources
Article Source

Copyright © 2013 Society of Hospital Medicine

Correspondence Location
Address for correspondence and reprint requests: Maireade E. McSweeney, MD, Division of Gastroenterology and Nutrition, Boston Children's Hospital, Boston, MA 02115; Telephone: 617‐355‐7036; Fax: 617–730‐0495; E‐mail: [email protected]

Hospital Value‐Based Purchasing

Article Type
Changed
Mon, 01/02/2017 - 19:34
Display Headline
Hospital value‐based purchasing

The Centers for Medicare and Medicaid Services' (CMS) Hospital Inpatient Value‐Based Purchasing (VBP) Program, which was signed into law as part of the Patient Protection and Affordable Care Act of 2010, aims to incentivize inpatient providers to deliver high‐value, as opposed to high‐volume, healthcare.[1] Beginning on October 1, 2012, the start of the 2013 fiscal year (FY), hospitals participating in the VBP program became eligible for a variety of performance‐based incentive payments from CMS. These payments are based on an acute care hospital's ability to meet performance measurements in 6 care domains: (1) patient safety, (2) care coordination, (3) clinical processes and outcomes, (4) population or community health, (5) efficiency and cost reduction, and (6) patient‐ and caregiver‐centered experience.[2] The VBP program's ultimate purpose is to enable CMS to improve the health of Medicare beneficiaries by purchasing better care for them at a lower cost. These 3 characteristics of care (improved health, improved care, and lower costs) are the foundation of CMS' conception of value.[1, 2] They are closely related to an economic conception of value, which is the difference between an intervention's benefit and its cost.

Although in principle not a new idea, the formal mandate of hospitals to provide high‐value healthcare through financial incentives marks an important change in Medicare and Medicaid policy. In this opportune review of VBP, we first discuss the relevant historical changes in the reimbursement environment of US hospitals that have set the stage for VBP. We then describe the structure of CMS' VBP program, with a focus on which facilities are eligible to participate in the program, the specific outcomes measured and incentivized, how rewards and penalties are allocated, and how the program will be funded. In an effort to anticipate some of the issues that lie ahead, we then highlight a number of potential challenges to the success of VBP, and discuss how VBP will impact the delivery and reimbursement of inpatient care services. We conclude by examining how the VBP program is likely to evolve over time.

HISTORICAL CONTEXT FOR VBP

Over the last decade, CMS has embarked on a number of initiatives to incentivize the provision of higher‐quality and more cost‐effective care. For example, in 2003, CMS implemented a national pay‐for‐performance (P4P) pilot project called the Premier Hospital Quality Incentive Demonstration (HQID).[3, 4] HQID, which ran for 6 years, tracked and rewarded the performance of 216 hospitals in 6 healthcare service domains: (1) acute myocardial infarction (AMI), (2) congestive heart failure (CHF), (3) pneumonia, (4) coronary artery bypass graft surgery, (5) hip and knee replacement surgery, and (6) perioperative management of surgical patients (including prevention of surgical site infections).[4] CMS then introduced its Hospital Compare Web site in 2005 to facilitate public reporting of hospital‐level quality outcomes.[3, 5] This Web site provides the public with access to data on hospital performance across a wide array of measures of process quality, clinical outcomes, spending, and resource utilization.[5] Next, in October 2008, CMS stopped reimbursing hospitals for a number of costly and common hospital‐acquired complications, including hospital‐acquired bloodstream infections and urinary tract infections, patient falls, and pressure ulcers.[3, 6] VBP is the latest and most comprehensive step that CMS has taken in its decade‐long effort to shift from volume to value‐based compensation for inpatient care.

Although CMS appears fully invested in using performance incentives to increase healthcare value, existing evidence of the effects of P4P on patient outcomes remains quite mixed.[7] On one hand, an analysis of an inpatient P4P program sponsored by the United Kingdom's National Health Service (NHS) suggests that P4P may improve quality and save lives; indeed, hospitals that participated in the NHS P4P program significantly reduced inpatient mortality from pneumonia, saving an estimated 890 lives.[8] Additional empirical work suggests that the HQID was also associated with early improvements in healthcare quality.[9] However, a subsequent long‐term analysis found that participation in HQID had no discernible effect on 30‐day mortality rates.[10] Moreover, a meta‐analysis of P4P incentives for individual practitioners found few methodologically robust studies of P4P for clinicians and concluded that P4P's effects on individual practice patterns and outcomes remain largely uncertain.[11]

VBP: STRUCTURE AND DESIGN

This section reviews the structure of the VBP program. We describe current VBP eligibility criteria and sources of funding for the program, how hospitals participating in VBP are evaluated, and how VBP incentives for FY 2013 have been calculated.

Hospital Eligibility for VBP

All acute care hospitals in the United States (excluding Maryland) that are not psychiatric hospitals, rehabilitation hospitals, long‐term care facilities, children's hospitals, or cancer hospitals are eligible to participate in VBP in FY 2013 (full eligibility criteria are outlined in Table 1). For FY 2013, CMS chose to incentivize measures in just 2 care domains: (1) clinical processes of care and (2) patient experience of care. To be eligible for VBP in FY 2013, a hospital must report at least 10 cases each in at least 4 of 12 measures included in the clinical processes of care domain (Table 2), and/or must have at least 100 completed Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. Designed and validated by CMS, the HCAHPS survey provides hospitals with a standardized instrument for gathering information about patient satisfaction with, and perspectives on, their hospital care.[12] HCAHPS will be used to assess 8 patient experience of care measures (Table 3).
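To make the reporting thresholds concrete, the following minimal sketch (in Python, with illustrative names that are not CMS terminology) encodes the FY 2013 minimums described above; the "and/or" in the rule is read here as either threshold being sufficient.

def meets_fy2013_reporting_thresholds(cases_per_process_measure, completed_hcahps_surveys):
    """Hypothetical check of the FY 2013 VBP reporting minimums.

    cases_per_process_measure: dict mapping each of the 12 clinical process of
        care measures to the number of cases the hospital reported.
    completed_hcahps_surveys: number of completed HCAHPS surveys.
    """
    measures_with_enough_cases = sum(
        1 for cases in cases_per_process_measure.values() if cases >= 10
    )
    has_process_minimum = measures_with_enough_cases >= 4
    has_hcahps_minimum = completed_hcahps_surveys >= 100
    # The text states the requirement as "and/or"; either threshold is treated
    # as sufficient here.
    return has_process_minimum or has_hcahps_minimum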

Inclusion and Exclusion Criteria for the Inpatient Value‐Based Purchasing Program in Fiscal Year 2013
  • NOTE: Abbreviations: HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; HHS, US Department of Health and Human Services; VBP, Value‐Based Purchasing.

Inclusion criteria
Acute care hospital
Located in one of the 50 US states (excluding Maryland) or the District of Columbia
Has at least 10 cases in at least 4 of 12 clinical process of care measures and/or at least 100 completed HCAHPS surveys
Exclusion criteria
Psychiatric, rehabilitation, long‐term care, children's or cancer hospital
Does not participate in Hospital Inpatient Quality Reporting Program during the VBP performance period
Cited by the Secretary of HHS for significant patient safety violations during performance period
Hospital does not meet minimum reporting requirements for number of cases, process measures, and surveys needed to participate in VBP
Clinical Process of Care Measures Evaluated by Value‐Based Purchasing in Fiscal Year 2013
Disease Process / Process of Care Measure
  • NOTE: Mortality measures to be added in fiscal year 2014: acute myocardial infarction, congestive heart failure, pneumonia.

Acute myocardial infarction
  Fibrinolytic therapy received within 30 minutes of hospital arrival
  Primary percutaneous coronary intervention received within 90 minutes of hospital arrival
Heart failure
  Discharge instructions provided
Pneumonia
  Blood cultures performed in the emergency department prior to initial antibiotic received in hospital
  Initial antibiotic selection for community‐acquired pneumonia in immunocompetent patient
Healthcare‐associated infections
  Prophylactic antibiotic received within 1 hour prior to surgical incision
  Prophylactic antibiotic selection for surgical patients
  Prophylactic antibiotics discontinued within 24 hours after surgery ends
  Cardiac surgery patients with controlled 6:00 am postoperative serum glucose
Surgeries
  Surgery patients on β‐blocker prior to arrival that received β‐blocker during perioperative period
  Surgery patients with recommended venous thromboembolism prophylaxis ordered
  Surgery patients who received appropriate venous thromboembolism prophylaxis within 24 hours prior to surgery to 24 hours after surgery
Patient Experience of Care Measures Evaluated by Value‐Based Purchasing in Fiscal Year 2013
Communication with nurses
Communication with doctors
Responsiveness of hospital staff
Pain management
Communication about medicines
Cleanliness and quietness of hospital environment
Discharge information
Overall rating of hospital

Participation in the program is mandatory for eligible hospitals, and CMS estimates that more than 3000 facilities across the United States will participate in FY 2013. Roughly $850 million in VBP incentives will be paid out to these participating hospitals in FY 2013. The program is being financed through a 1% across‐the‐board reduction in FY 2013 diagnosis‐related group (DRG)‐based inpatient payments to participating hospitals. On December 20, 2012, CMS publicly announced FY 2013 VBP incentives for all participating hospitals. Each hospital's incentive is retroactive and based on its performance between July 1, 2011 and March 31, 2012.

All data used for calculating VBP incentives is reported to CMS through its Hospital Inpatient Quality Reporting (Hospital IQR) Program, a national program instituted in 2003 that rewards hospitals for reporting designated quality measures. As of 2007, approximately 95% of eligible US hospitals were using the Hospital IQR program.[1] Measures evaluated via chart abstracts and surveys reflect a hospital's performance for its entire patient population, whereas measures assessed with claims data reflect hospital performance only for Medicare patients.

Evaluation of Hospitals

In FY 2013, hospital VBP incentive payments will be based entirely on performance in 2 domains: (1) clinical processes of care (weighted 70%) and (2) patient experience of care (weighted 30%). For each domain, CMS will evaluate each hospital's improvement over time as well as achievement compared to other hospitals in the VBP program. By assessing and rewarding both achievement and improvement, CMS will ensure that lower‐performing hospitals will still be rewarded for making substantial improvements in quality. To evaluate the first metric, improvement over time, CMS will compare a hospital's performance during a given reporting period with its baseline performance 2 years prior to this block of time. A hospital receives improvement points for improving its performance over time. To assess the second metric, achievement compared to other hospitals in the VBP program, CMS will compare each hospital's performance during a reporting period with the baseline performance (eg, performance 2 years prior to reporting period) of all other hospitals in the VBP program. A hospital is awarded achievement points if its performance exceeds the 50th percentile of all hospitals during the baseline performance period. Improvement scores range from 0 to 9, whereas achievement scores range from 0 to 10. For each VBP measure, the greater of a hospital's improvement and achievement scores is used to calculate the hospital's total earned clinical care domain score and total earned HCAHPS base score. Hospitals that lack baseline performance data, which is required to assess improvement, will be evaluated solely on the basis of achievement points.[1] The total earned clinical care domain score is multiplied by 70% to reach the clinical care domain's contribution to a hospital's total performance score.
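As a rough illustration of the scoring mechanics described above, the sketch below (Python; simplified, not CMS's exact formula) takes precomputed improvement points (0 to 9) and achievement points (0 to 10) for each measure, keeps the greater of the two, and assumes the earned total is normalized against the maximum attainable points, a step the text does not spell out.

MAX_POINTS_PER_MEASURE = 10  # achievement points range from 0 to 10

def clinical_care_domain_score(measure_points):
    """measure_points: list of (improvement_points, achievement_points) pairs,
    one per clinical process of care measure scored for the hospital. Returns
    the fraction of attainable points earned (normalizing by the maximum is an
    assumption, not a stated CMS rule)."""
    earned = sum(max(improvement, achievement) for improvement, achievement in measure_points)
    possible = MAX_POINTS_PER_MEASURE * len(measure_points)
    return earned / possible

# Example with made-up numbers: measures scored (7, 5), (3, 9), and (10, 6)
# earn max(7,5) + max(3,9) + max(10,6) = 26 of 30 points, a domain score of
# about 0.87, which is then multiplied by 70% for its contribution to the
# total performance score.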

Each hospital's total patient experience domain, or HCAHPS performance, score consists of 2 components: a total earned HCAHPS base score as described above and a consistency score. The consistency score evaluates the reliability of a hospital's performance across all 8 patient experience of care measures (Table 3). If a hospital is above the 50th percentile of all hospital scores during the baseline period on all 8 measures, then it receives 100% of its consistency points. If a hospital is at the 0th percentile for a given measure, then it receives 0 consistency points for all measures. This provision promotes consistency by harshly penalizing hospitals with extremely poor performance on any 1 specific measure. If 1 or more measures are between the 0th and 50th percentiles, then the hospital will receive a consistency score that takes into account how many measures were below the 50th percentile and their distance from this threshold. Each hospital's total HCAHPS performance score (the sum of total earned HCAHPS base points and consistency points) is then multiplied by 30% to arrive at the patient experience of care domain's contribution to a hospital's total performance score.
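The consistency rule above specifies only its endpoints, so the sketch below (Python) is illustrative rather than CMS's actual formula: it awards full consistency points when every HCAHPS measure exceeds the baseline 50th percentile, zero points when any measure sits at the 0th percentile, and fills the middle with a hypothetical proportional penalty based on how far each below-median measure falls short of the 50th percentile.

def consistency_fraction(percentile_ranks):
    """percentile_ranks: 8 baseline-period percentile ranks (0-100), one per
    HCAHPS patient experience of care measure. Returns the fraction of
    consistency points awarded under this illustrative rule."""
    if any(rank <= 0 for rank in percentile_ranks):
        return 0.0  # any measure at the 0th percentile forfeits all consistency points
    if all(rank > 50 for rank in percentile_ranks):
        return 1.0  # all measures above the baseline median earn full points
    # Hypothetical middle case: penalize each below-median measure in proportion
    # to its shortfall from the 50th percentile, averaged over all 8 measures.
    shortfalls = [max(0.0, (50 - rank) / 50) for rank in percentile_ranks]
    return 1.0 - sum(shortfalls) / len(percentile_ranks)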

Importantly, CMS excluded from its VBP initiative 10 clinical process measures reported in the Hospital IQR Program because they are topped out; that is, almost all hospitals already perform them at very high rates (Table 4). Examples of these topped out process measures include administration of aspirin to all patients with AMI on arrival at the hospital; counseling of patients with AMI, CHF, and pneumonia about smoking cessation; and prescribing angiotensin‐converting enzyme inhibitors or angiotensin receptor blockers to patients with CHF and left ventricular dysfunction.[1]

Topped Out Measures
Disease Process / Measure
  • NOTE: Abbreviations: ACEI, angiotensin‐converting enzyme inhibitor; ARB, angiotensin receptor blocker.

Acute myocardial infarction
  Aspirin administered on arrival to the emergency department
  ACEI or ARB prescribed on discharge
  Patient counseled about smoking cessation
  β‐Blocker prescribed on discharge
  Aspirin prescribed at discharge
Heart failure
  Patient counseled about smoking cessation
  Evaluation of left ventricular systolic function
  ACEI or ARB prescribed for left ventricular systolic dysfunction
Pneumonia
  Patient counseled about smoking cessation
Surgical Care Improvement Project
  Surgery patients with appropriate hair removal

Calculation of VBP Incentives and Public Reporting

A hospital's total performance score for FY 2013 is equal to the sum of 70% of its clinical care domain score and 30% of its total HCAHPS performance score. This total performance score is entered into a linear mathematical formula to calculate each hospital's incentive payment. CMS projects that VBP will lead to a net increase in Medicare payments for one‐half of hospitals and a net decrease in payments for the other half of participating facilities.[1]
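A minimal sketch of the FY 2013 arithmetic described above follows (Python). The 70%/30% domain weights come from the text; the slope of the linear payment formula is set by CMS so that total incentives pay out the withheld pool, and the value used in the example below is a placeholder, not the actual figure.

def total_performance_score(clinical_care_domain_score, hcahps_performance_score):
    """Combine the two FY 2013 domains using the weights stated in the text.
    Inputs are each domain's score on a 0-1 scale before weighting."""
    return 0.70 * clinical_care_domain_score + 0.30 * hcahps_performance_score

def incentive_percentage(tps, linear_slope):
    """Map a total performance score to the percentage of DRG payments earned
    back via a linear formula (the slope value is a placeholder)."""
    return linear_slope * tps

# Example with hypothetical numbers: a total performance score of 0.55 and a
# placeholder slope of 2.0 would earn back 1.1% of DRG payments, a net gain of
# 0.1 percentage points relative to the 1% withhold.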

In December 2012, CMS publicly disclosed information about the initial performance of each hospital in the VBP program. Reported information included: (1) hospital performance for each applicable performance measure, (2) hospital performance by disease condition or procedure, and (3) each hospital's total performance score. Initial analyses of this performance data revealed that 1557 hospitals will receive bonus payments under VBP in FY 2013, whereas 1427 hospitals will lose money under this program. Treasure Valley Hospital, a 10‐bed physician‐owned hospital in Boise, Idaho, will receive a 0.83% increase in Medicare payments, the largest payment increase under VBP in 2013. Conversely, Auburn Community Hospital in upstate New York will suffer the most severe payment reduction: 0.9% per Medicare admission. The penalty will cost Auburn Hospital about $100,000, which is slightly more than 0.1% of its yearly $85 million operating budget.[13] For almost two‐thirds of participating hospitals, FY 2013 Medicare payments will change by <0.25%.[13] Additional information about VBP payments for FY 2013, including the number of hospitals that received VBP incentives and the size and range of these payments, is now accessible to the public through CMS' Hospital Compare Web site (http://www.hospitalcompare.hhs.gov).

CHALLENGES OF VBP

As the Medicare VBP program evolves, and hospitals confront ever‐larger financial incentives to deliver high‐value as opposed to high‐volume care, it will be important to recognize limitations of the VBP program as they arise. Here we briefly discuss several conceptual and implementation challenges that physicians and policymakers should consider when assessing the merits of VBP in promoting high‐quality healthcare.

Rigorous and Continuous Evaluation of VBP Programs

The main premise of using VBP to incentivize hospitals to deliver high‐quality cost‐effective care is that the process measures used to determine hospital quality do impact patient outcomes. However, it is already well established that improvements in measures of process quality are not always associated with improvements in patient outcomes.[14, 15, 16] Moreover, incentivizing specific process measures encourages hospitals to shift resources away from other aspects of care delivery, which may have ambiguous, or even deleterious, effects on patient outcomes. Although incentives ideally push hospitals to shift resources away from low‐quality care toward high‐quality care, in practice this is not always the case. Hospital resources may instead be drawn away from areas that are not yet incented by VBP, but for which improvements in quality of care are desperately needed. The same empirical focus behind using VBP to incentivize hospitals to improve patient outcomes efficiently should be used to evaluate whether VBP is continually meeting its stated goals: reducing overall patient morbidity and mortality and improving patient satisfaction at ideally lower cost. The experience of the US education system with public policies designed to improve student testing performance may serve as a cautionary example here. Such policies, which provide financial rewards to schools whose students perform well on standardized tests, can indeed raise testing performance. However, these policies also lead educators to teach to the test, and to neglect important topics that are not tested on standardized exams.[17]

Prioritization of Process Measures

As payment incentives for VBP currently stand, process measures are weighted equally regardless of the clinical benefits they generate and the resources required to achieve improvements in process quality. For instance, 2 process measures, continuing home β‐blocker medications for patients with coronary artery disease undergoing surgery and early percutaneous coronary intervention for patients with AMI, may be weighted equally although their clinical benefits and implementation costs are very different. Some hospitals responding to VBP incentives may choose to invest in areas where their ability to earn VBP incentive payments is high and the costs of improvement are low, even though those areas may not be where interventions are most needed or where clinical outcomes could be most improved. Recognizing that process measures have heterogeneous benefits and costs of implementation is important when prioritizing their reimbursement in VBP.

Measuring Improvements in Hospital Quality

Tying hospital financial compensation to hospital quality implies that measures of hospital quality should be robust. To incentivize hospitals to improve quality not only relative to other hospitals but to themselves in the past, the VBP program has established a baseline performance for each hospital. Each hospital is compared to its baseline performance in subsequent evaluation periods. Thus, properly measuring a hospital's baseline performance is important. During a given baseline period, some hospitals may have better or worse outcomes than their steady state due to random variation alone. Some hospitals deemed to have a low baseline will experience improvements in quality that are not related to active efforts to improve quality but through chance alone. Similarly, some hospitals deemed to have a high baseline will experience reductions in quality through chance. Of course, neither of these changes should be subject to differences in reimbursement because they do not reflect actual organizational changes made by the hospitals. The VBP program has made significant efforts to address this issue by requiring participating hospitals to have a large enough sample of cases such that estimated rates of process quality adherence meet a reliability threshold (ie, are likely to be consistent over time rather than vary substantially through chance alone). However, not all process measures exhibit high reliability, particularly those for which adverse events are rare (eg, foreign objects retained after surgery, air embolisms, and blood incompatibility). Ultimately, CMS's decision to balance the need for statistically reliable data with the goal of including as many hospitals as possible in the VBP program will require ongoing reevaluation of this issue.
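The chance-driven movement described above is easy to demonstrate. The simulation below (Python, with made-up numbers) gives every hospital the same true adherence rate, measures it from a small sample of cases, and shows that hospitals labeled as having a low baseline appear to improve at follow-up even though nothing about their care changed.

import random

random.seed(0)
TRUE_ADHERENCE = 0.90   # identical true process adherence for every hospital
CASES_PER_PERIOD = 50   # small samples make each measured rate noisy

def measured_rate():
    """Observed adherence rate from one period's worth of sampled cases."""
    return sum(random.random() < TRUE_ADHERENCE for _ in range(CASES_PER_PERIOD)) / CASES_PER_PERIOD

# Each hospital gets a (baseline, follow-up) pair of independent measurements.
hospitals = [(measured_rate(), measured_rate()) for _ in range(2000)]

# "Low baseline" hospitals are those whose noisy baseline fell below 85%.
low_baseline = [(b, f) for b, f in hospitals if b < 0.85]
baseline_avg = sum(b for b, _ in low_baseline) / len(low_baseline)
followup_avg = sum(f for _, f in low_baseline) / len(low_baseline)
print(f"low-baseline hospitals: baseline {baseline_avg:.3f}, follow-up {followup_avg:.3f}")
# The follow-up average drifts back toward 0.90 purely by chance, mimicking
# "improvement" that reflects no real change in quality.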

Choosing Hospital Comparators Appropriately

In the current VBP program, hospitals will be evaluated in part by how they compare to hospitals nationally. However, studies of regional variation in healthcare have demonstrated large variations in practice patterns across the United States,[18, 19, 20] raising the question of whether hospitals should, at least initially, be compared to hospitals in the same geographic area. Although the ultimate goal of VBP should be to hold hospitals to a national standard, local practice patterns are not easily modified within 1‐ to 2‐year timeframes. Initially comparing hospitals to a national rather than local standard may unfairly penalize hospitals that are relative underperformers nationally but overperformers regionally. Although CMS's policy to reward improvement within hospitals over time mitigates issues arising from a cross‐sectional comparison of hospitals, the issue still remains if many hospitals within a region not only underperform relative to other hospitals nationally but also fail to demonstrate improvement. More broadly, this issue extends to differences across hospitals in factors that impact their ability to meet VBP goals. These factors may include, for example, hospital size, profitability, patient case and insurance mix, and presence of an electronic medical record. Comparing hospitals with vastly different abilities to achieve VBP goals and improve quickly may amount to inequitable policy.

Continual Evaluation of Topped‐Out Measures

Process measures that are met at high rates at nearly all hospitals are not used in evaluations by CMS for VBP. An assumption underlying CMS' decision to not reward hospitals for achieving these topped‐out measures is that once physicians and hospitals make cognitive and system‐level improvements that improve process quality, these gains will persist after the incentive is removed. Thus, CMS hopes and anticipates that although performance incentives will make it easier for well‐meaning physicians to learn to do the right thing, doctors will continue to do the right things for patients after these incentives are removed.[21, 22] Although this assumption may generally be accurate, it is important to continue to evaluate whether measures that are currently topped out remain adequately performed, because rewarding new quality measures will necessarily lead hospitals to reallocate resources away from other clinical activities. Although we hope that the continued public reporting of topped‐out measures will prevent declines in performance on these measures, policy makers and clinicians should be aware that the lack of financial incentives for topped‐out measures may result in declines in quality. To this point, an analysis of 35 Kaiser Permanente facilities from 1997 to 2007 demonstrated that the removal of financial incentives for diabetic retinopathy and cervical cancer screening was associated with subsequent declines in performance of 3% and 1.6% per year, respectively.[23]

Will VBP Incentives Be Large Enough to Change Practice Patterns?

The VBP Program's ability to influence change depends, at least in part, on how the incentives offered under this program compare to the magnitude of the investments that hospitals must make to achieve a given reward. In general, larger incentives are necessary to motivate more significant changes in behavior or to influence organizations to invest the resources needed to achieve change. The incentives offered under VBP in FY 2013 are quite modest. Almost two‐thirds of participating hospitals will see their FY 2013 Medicare revenues change by <0.25%, roughly $125,000 at most.[13, 24] Although these incentives may motivate hospitals that can improve performance and achievement with very modest investments, they may have little impact on organizations that need to make significant upfront investments in care processes to achieve sustainable improvements in care quality. As CMS increases the size of VBP incentives over the next 2 to 4 years, it will also hold hospitals accountable for a broader and increasingly complex set of outcomes. Improving these outcomes may require investments in areas such as information technology and process improvement that far surpass the VBP incentive reward.

Moreover, prior research suggests that financial incentives like those available under VBP may contribute only slightly to performance improvements when public reporting already exists. For example, in a 2‐year study of 613 US hospitals implementing pay‐for‐performance plus public reporting or public reporting only, pay for performance plus public reporting was associated with only a 2.6% to 4.1% increase in a composite measure of quality when compared to hospitals with public reporting only.[9] Similarly, a study of 54 hospitals participating in the CMS pay for performance pilot initiative found no significant improvement in quality of care or outcomes for AMI when compared to 446 control hospitals.[25] A long‐term analysis of pay for performance in the Medicare Premier Hospital Quality Incentive Demonstration found that participation in the program had no discernible effect on 30‐day mortality rates.[10] Finally, a study of physician medical groups contracting with a large network healthcare maintenance organization found that the implementation of pay for performance did not result in major before and after improvements in clinical quality compared to a control group of medical groups.[26]

High‐Value Care Is Not Always Low‐Cost Care

Not surprisingly, the clinical process measures included in CMS' hospital VBP program evaluate a select and relatively small group of high‐value and low‐cost interventions (eg, appropriate administration of antibiotics and tight control of serum glucose in surgical patients). However, an important body of work has demonstrated that high‐cost care (eg, intensive inpatient hospital care for common acute medical conditions) may also be highly valuable in terms of improving survival.[20, 27, 28, 29, 30] As the hospital VBP program evolves, its overseers will need to consider whether to include additional incentives for high‐value high‐cost healthcare services. Such considerations will likely become increasingly salient as healthcare delivery organizations move toward capitated delivery models. In particular, the VBP program's Medicare Spending Per Beneficiary measure, which quantifies inpatient and subsequent outpatient spending per beneficiary after a given hospitalization episode, will need to distinguish between higher‐spending hospitals that provide highly effective care (eg, care that reduces mortality and readmissions) and facilities that provide less‐effective care.

FUTURE OF VBP

Although the future of VBP is unknown, CMS is likely to modify the program in a number of ways over the next 3 to 5 years. First, CMS will likely expand the breadth and focus of incentivized measures in the VBP program. In FY 2014, for example, CMS is adding a set of 3 outcome measures to VBP: 30‐day risk‐adjusted mortality for AMI, CHF, and pneumonia.[1] A hospital's performance with respect to these outcomes will represent 25% of its total performance score in 2014, whereas the clinical process of care and patient experience of care domains will account for 45% and 30% of this score, respectively. In 2015, patient experience and outcome measures will each account for 30% of a hospital's performance score, whereas process and efficiency measures will each account for 20%. The composition of this performance score reflects a shift away from rewarding process‐based measures and toward incentivizing measures of clinical outcomes and patient satisfaction, the latter of which may be highly subjective and more representative of a hospital's catchment population than of a hospital's care itself.[31] Additional measures in the domains of patient safety, care coordination, population and community health, emergency room wait times, and cost control may also be added to the VBP program in FY 2015 to FY 2017. Furthermore, CMS will continue to reevaluate the appropriateness of measures that are already included in VBP and will stop incentivizing measures that have become topped out, or are no longer supported by the National Quality Forum.[1, 13]

Second, CMS has established a gradual annual increase of 0.25 percentage points in the share of each hospital's inpatient DRG‐based payment that is at stake under VBP. In FY 2014, for example, participating hospitals will be required to contribute 1.25% of inpatient DRG payments to the VBP program. This percentage is likely to increase to 2% or more by 2017.[1, 32]
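For orientation, if the 0.25 percentage point annual increase continues uninterrupted from the 1% withhold in FY 2013, the arithmetic reaches the 2% figure in FY 2017: 1.00% plus 4 annual increases of 0.25% equals 2.00%.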

Third, expansions of the VBP program complement a number of other quality improvement efforts overseen by CMS, including the Hospital Readmissions Reduction Program. Effective for discharges beginning on October 1, 2012, hospitals with excess readmissions for AMI, CHF, and pneumonia are at risk for reimbursement reductions for all Medicare admissions in proportion to the rate of excess rehospitalizations. Some of the same concerns about the hospital VBP program outlined above have also been raised for this program, namely, whether readmission penalties will be large enough to impact hospital behavior, whether readmissions are even preventable,[33, 34] and whether adjustments in hospital‐level policies will reduce readmissions that are known to be heavily influenced by patient economic and social factors that are outside of a hospital's control.[35, 36] Despite the limitations of VBP and the challenges that lie ahead, there is optimism that rewarding hospitals that provide high‐value rather than high‐volume care will not only improve outcomes of hospitalized patients in the United States but may do so at a lower cost. Encouraging hospitals to improve their quality of care may also have important spillover effects on other healthcare domains. For example, hospitals that adopt systems to ensure prompt delivery of antibiotics to patients with pneumonia may also observe positive spillover effects with the prompt antibiotic management of other acute infectious illnesses that are not covered by VBP. VBP may have spillover effects on medical malpractice liability and defensive medicine as well. Indeed, financial incentives to practice higher‐quality evidence‐based care may reduce medical malpractice liability and defensive medicine.

The government's ultimate goal in implementing VBP is to identify a broad and clinically relevant set of outcome measures that can be used to incentivize hospitals to deliver high‐quality as opposed to high‐volume healthcare. The first wave of outcome measures has already been instituted. It remains to be seen whether the incentive rewards of Medicare's hospital VBP program will be large enough that hospitals feel compelled to improve and compete for them.

References
  1. Centers for Medicare and Medicaid Services. Hospital Value‐Based Purchasing Web site. 2013. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/hospital‐value‐based‐purchasing/index.html. Accessed March 4, 2013.
  2. VanLare JM, Conway PH. Value‐based purchasing—national programs to move from volume to value. N Engl J Med. 2012;367:292-295.
  3. Joynt KE, Rosenthal MB. Hospital value‐based purchasing: will Medicare's new policy exacerbate disparities? Circ Cardiovasc Qual Outcomes. 2012;5:148-149.
  4. Centers for Medicare and Medicaid Services. CMS/premier hospital quality incentive demonstration (QHID). 2013. Available at: https://www.premierinc.com/quality‐safety/tools‐services/p4p/hqi/faqs.jsp. Accessed March 5, 2013.
  5. Centers for Medicare and Medicaid Services. Hospital Compare Web site. 2013. Available at: http://www.medicare.gov/hospitalcompare. Accessed March 4, 2013.
  6. Brown J, Doloresco F, Mylotte JM. “Never events”: not every hospital‐acquired infection is preventable. Clin Infect Dis. 2009;49:743-746.
  7. Epstein AM. Will pay for performance improve quality of care? The answer is in the details. N Engl J Med. 2012;367:1852-1853.
  8. Sutton M, Nikolova S, Boaden R, Lester H, McDonald R, Roland M. Reduced mortality with hospital pay for performance in England. N Engl J Med. 2012;367:1821-1828.
  9. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356:486-496.
  10. Jha AK, Joynt KE, Orav EJ, Epstein AM. The long‐term effect of premier pay for performance on patient outcomes. N Engl J Med. 2012;366:1606-1615.
  11. Houle SK, McAlister FA, Jackevicius CA, Chuck AW, Tsuyuki RT. Does performance‐based remuneration for individual health care practitioners affect patient care?: a systematic review. Ann Intern Med. 2012;157:889-899.
  12. Centers for Medicare and Medicaid Services. Hospital Consumer Assessment Of Healthcare Providers and Systems Web site. 2013. Available at: http://www.hcahpsonline.org. Accessed March 5, 2013.
  13. Rau J. Medicare discloses hospitals' bonuses, penalties based on quality. Kaiser Health News. December 20, 2012. Available at: http://www.kaiserhealthnews.org/stories/2012/december/21/medicare‐hospitals‐value‐based‐purchasing.aspx?referrer=search. Accessed March 26, 2013.
  14. Yasaitis L, Fisher ES, Skinner JS, Chandra A. Hospital quality and intensity of spending: is there an association? Health Aff (Millwood). 2009;28:w566-w572.
  15. Fonarow GC, Abraham WT, Albert NM, et al. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA. 2007;297:61-70.
  16. Rubin HR, Pronovost P, Diette GB. The advantages and disadvantages of process‐based measures of health care quality. Int J Qual Health Care. 2001;13:469-474.
  17. Jacob BA. Accountability, incentives and behavior: the impact of high‐stakes testing in the Chicago public schools. J Public Econ. 2005;89:761-796.
  18. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138:273-287.
  19. Fisher ES. Medical care—is more always better? N Engl J Med. 2003;349:1665-1667.
  20. Romley JA, Jena AB, Goldman DP. Hospital spending and inpatient mortality: evidence from California: an observational study. Ann Intern Med. 2011;154:160-167.
  21. James BC. Making it easy to do it right. N Engl J Med. 2001;345:991-993.
  22. Christensen RD, Henry E, Ilstrup S, Baer VL. A high rate of compliance with neonatal intensive care unit transfusion guidelines persists even after a program to improve transfusion guideline compliance ended. Transfusion. 2011;51:2519-2520.
  23. Lester H, Schmittdiel J, Selby J, et al. The impact of removing financial incentives from clinical quality indicators: longitudinal analysis of four Kaiser Permanente indicators. BMJ. 2010;340:c1898.
  24. Werner RM, Dudley RA. Medicare's new hospital value‐based purchasing program is likely to have only a small impact on hospital payments. Health Aff (Millwood). 2012;31:1932-1940.
  25. Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA. 2007;297:2373-2380.
  26. Mullen KJ, Frank RG, Rosenthal MB. Can you get what you pay for? Pay‐for‐performance and the quality of healthcare providers. Rand J Econ. 2010;41:64-91.
  27. Romley JA, Jena AB, O'Leary JF, Goldman DP. Spending and mortality in US acute care hospitals. Am J Manag Care. 2013;19:e46-e54.
  28. Barnato AE, Farrell MH, Chang CC, Lave JR, Roberts MS, Angus DC. Development and validation of hospital “end‐of‐life” treatment intensity measures. Med Care. 2009;47:1098-1105.
  29. Ong MK, Mangione CM, Romano PS, et al. Looking forward, looking back: assessing variations in hospital resource use and outcomes for elderly patients with heart failure. Circ Cardiovasc Qual Outcomes. 2009;2:548-557.
  30. Stukel TA, Fisher ES, Alter DA, et al. Association of hospital spending intensity with mortality and readmission rates in Ontario hospitals. JAMA. 2012;307:1037-1045.
  31. Young GJ, Meterko M, Desai KR. Patient satisfaction with hospital care: effects of demographic and institutional characteristics. Med Care. 2000;38:325-334.
  32. VanLare JM, Blum JD, Conway PH. Linking performance with payment: implementing the Physician Value‐Based Payment Modifier. JAMA. 2012;308:2089-2090.
  33. Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183:E391-E402.
  34. Walraven C, Jennings A, Taljaard M, et al. Incidence of potentially avoidable urgent readmissions and their relation to all‐cause urgent readmissions. CMAJ. 2011;183:E1067-E1072.
  35. Joynt KE, Jha AK. Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366:1366-1369.
  36. Joynt KE, Orav EJ, Jha AK. Thirty‐day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305:675-681.
Issue
Journal of Hospital Medicine - 8(5)
Page Number
271-277


Moreover, prior research suggests that financial incentives like those available under VBP may contribute only slightly to performance improvements when public reporting already exists. For example, in a 2‐year study of 613 US hospitals implementing pay‐for‐performance plus public reporting or public reporting only, pay for performance plus public reporting was associated with only a 2.6% to 4.1% increase in a composite measure of quality when compared to hospitals with public reporting only.[9] Similarly, a study of 54 hospitals participating in the CMS pay for performance pilot initiative found no significant improvement in quality of care or outcomes for AMI when compared to 446 control hospitals.[25] A long‐term analysis of pay for performance in the Medicare Premier Hospital Quality Incentive Demonstration found that participation in the program had no discernible effect on 30‐day mortality rates.[10] Finally, a study of physician medical groups contracting with a large network healthcare maintenance organization found that the implementation of pay for performance did not result in major before and after improvements in clinical quality compared to a control group of medical groups.[26]

High‐Value Care Is Not Always Low‐Cost Care

Not surprisingly, the clinical process measures included in CMS' hospital VBP program evaluate a select and relatively small group of high‐value and low‐cost interventions (eg, appropriate administration of antibiotics and tight control of serum glucose in surgical patients). However, an important body of work has demonstrated that high‐cost care (eg, intensive inpatient hospital care for common acute medical conditions) may also be highly valuable in terms of improving survival.[20, 27, 28, 29, 30] As the hospital VBP program evolves, its overseers will need to consider whether to include additional incentives for high‐value high‐cost healthcare services. Such considerations will likely become increasingly salient as healthcare delivery organizations move toward capitated delivery models. In particular, the VBP program's Medicare Spending Per Beneficiary measure, which quantifies inpatient and subsequent outpatient spending per beneficiary after a given hospitalization episode, will need to distinguish between higher‐spending hospitals that provide highly effective care (eg, care that reduces mortality and readmissions) and facilities that provide less‐effective care.

FUTURE OF VBP

Although the future of VBP is unknown, CMS is likely to modify the program in a number of ways over the next 3 to 5 years. First, CMS will likely expand the breadth and focus of incentivized measures in the VBP program. In FY 2014, for example, CMS is adding a set of 3, 30‐day mortality outcome measures to VBP: 30‐day risk‐adjusted mortality for AMI, CHF, and pneumonia.[1] A hospital's performance with respect to these outcomes will represent 25% of its total performance score in 2014, whereas the clinical process of care and patient experience of care domains will account for 45% and 30% of this score, respectively. In 2015, patient experience and outcome measures will account for 30% each in a hospital's performance score, whereas process and efficiency measures will each account for 20% of this score, respectively. The composition of this performance score evidences a shift away from rewarding process‐based measures and toward incentivizing measures of clinical outcomes and patient satisfaction, the latter of which may be highly subjective and more representative of a hospital's catchment population than of a hospital's care itself.[31] Additional measures in the domains of patient safety, care coordination, population and community health, emergency room wait times, and cost control may also be added to the VBP program in FY 2015 to FY 2017. Furthermore, CMS will continue to reevaluate the appropriateness of measures that are already included in VBP and will stop incentivizing measures that have become topped out, or are no longer supported by the National Quality Forum.[1, 13]

Second, CMS has established an annual gradual increase of 0.25% in the percentage of each hospital's inpatient DRG‐based payment that is at stake under VBP. In FY 2014, for example, participating hospitals will be required to contribute 1.25% of inpatient DRG payments to the VBP program. This percentage is likely to increase to 2% or more by 2017.[1, 32]

Third, expansions of the VBP program complement a number of other quality improvement efforts overseen by CMS, including the Hospital Readmissions Reduction Program. Effective for discharges beginning on October 1, 2012, hospitals with excess readmissions for AMI, CHF, and pneumonia are at risk for reimbursement reductions for all Medicare admissions in proportion to the rate of excess rehospitalizations. Some of the same concerns about the hospital VBP program outlined above have also been raised for this program, namely, whether readmission penalties will be large enough to impact hospital behavior, whether readmissions are even preventable,[33, 34] and whether adjustments in hospital‐level policies will reduce admissions that are known to be heavily influenced by patient economic and social factors that are outside of a hospital's control.[35, 36] Despite the limitations of VBP and the challenges that lie ahead, there is optimism that rewarding hospitals that provide high‐value rather than high‐volume care will not only improve outcomes of hospitalized patients in the United States, but will potentially be able to do so at a lower cost. Encouraging hospitals to improve their quality of care may also have important spillover effects on other healthcare domains. For example, hospitals that adopt systems to ensure prompt delivery of antibiotics to patients with pneumonia may also observe positive spillover effects with the prompt antibiotic management of other acute infectious illnesses that are not covered by VBP. VBP may have spillover effects on medical malpractice liability and defensive medicine as well. Indeed, financial incentives to practice higher‐quality evidenced‐based care may reduce medical malpractice liability and defensive medicine.

The government's ultimate goal in implementing VBP is to identify a broad and clinically relevant set of outcome measures that can be used to incentivize hospitals to deliver high‐quality as opposed to high‐volume healthcare. The first wave of outcome measures has already been instituted. It remains to be seen whether the incentive rewards of Medicare's hospital VBP program will be large enough that hospitals feel compelled to improve and compete for them.

The Centers for Medicare and Medicaid Services' (CMS) Hospital Inpatient Value‐Based Purchasing (VBP) Program, which was signed into law as part of the Patient Protection and Affordable Care Act of 2010, aims to incentivize inpatient providers to deliver high‐value, as opposed to high‐volume, healthcare.[1] Beginning on October 1, 2012, the start of the 2013 fiscal year (FY), hospitals participating in the VBP program became eligible for a variety of performance‐based incentive payments from CMS. These payments are based on an acute care hospital's ability to meet performance measurements in 6 care domains: (1) patient safety, (2) care coordination, (3) clinical processes and outcomes, (4) population or community health, (5) efficiency and cost reduction, and (6) patient‐ and caregiver‐centered experience.[2] The VBP program's ultimate purpose is to enable CMS to improve the health of Medicare beneficiaries by purchasing better care for them at a lower cost. These 3 characteristics of care (improved health, improved care, and lower costs) are the foundation of CMS' conception of value.[1, 2] They are closely related to an economic conception of value, which is the difference between an intervention's benefit and its cost.

Although in principle not a new idea, formally mandating that hospitals provide high‐value healthcare through financial incentives marks an important change in Medicare and Medicaid policy. In this timely review of VBP, we first discuss the relevant historical changes in the reimbursement environment of US hospitals that have set the stage for VBP. We then describe the structure of CMS' VBP program, with a focus on which facilities are eligible to participate in the program, the specific outcomes measured and incentivized, how rewards and penalties are allocated, and how the program will be funded. In an effort to anticipate some of the issues that lie ahead, we then highlight a number of potential challenges to the success of VBP, and discuss how VBP will impact the delivery and reimbursement of inpatient care services. We conclude by examining how the VBP program is likely to evolve over time.

HISTORICAL CONTEXT FOR VBP

Over the last decade, CMS has embarked on a number of initiatives to incentivize the provision of higher‐quality and more cost‐effective care. For example, in 2003, CMS implemented a national pay‐for‐performance (P4P) pilot project called the Premier Hospital Quality Incentive Demonstration (HQID).[3, 4] HQID, which ran for 6 years, tracked and rewarded the performance of 216 hospitals in 6 healthcare service domains: (1) acute myocardial infarction (AMI), (2) congestive heart failure (CHF), (3) pneumonia, (4) coronary artery bypass graft surgery, (5) hip and knee replacement surgery, and (6) perioperative management of surgical patients (including prevention of surgical site infections).[4] CMS then introduced its Hospital Compare Web site in 2005 to facilitate public reporting of hospital‐level quality outcomes.[3, 5] This Web site provides the public with access to data on hospital performance across a wide array of measures of process quality, clinical outcomes, spending, and resource utilization.[5] Next, in October 2008, CMS stopped reimbursing hospitals for a number of costly and common hospital‐acquired complications, including hospital‐acquired bloodstream infections and urinary tract infections, patient falls, and pressure ulcers.[3, 6] VBP is the latest and most comprehensive step that CMS has taken in its decade‐long effort to shift from volume to value‐based compensation for inpatient care.

Although CMS appears fully invested in using performance incentives to increase healthcare value, existing evidence of the effects of P4P on patient outcomes remains quite mixed.[7] On one hand, an analysis of an inpatient P4P program sponsored by the United Kingdom's National Health Service (NHS) suggests that P4P may improve quality and save lives; indeed, hospitals that participated in the NHS P4P program significantly reduced inpatient mortality from pneumonia, saving an estimated 890 lives.[8] Additional empirical work suggests that the HQID was also associated with early improvements in healthcare quality.[9] However, a subsequent long‐term analysis found that participation in HQID had no discernible effect on 30‐day mortality rates.[10] Moreover, a meta‐analysis of P4P incentives for individual practitioners found few methodologically robust studies of P4P for clinicians and concluded that P4P's effects on individual practice patterns and outcomes remain largely uncertain.[11]

VBP: STRUCTURE AND DESIGN

This section reviews the structure of the VBP program. We describe current VBP eligibility criteria and sources of funding for the program, how hospitals participating in VBP are evaluated, and how VBP incentives for FY 2013 have been calculated.

Hospital Eligibility for VBP

All acute care hospitals in the United States (excluding Maryland) that are not psychiatric hospitals, rehabilitation hospitals, long‐term care facilities, children's hospitals, or cancer hospitals are eligible to participate in VBP in FY 2013 (full eligibility criteria are outlined in Table 1). For FY 2013, CMS chose to incentivize measures in just 2 care domains: (1) clinical processes of care and (2) patient experience of care. To be eligible for VBP in FY 2013, a hospital must report at least 10 cases each in at least 4 of 12 measures included in the clinical processes of care domain (Table 2), and/or must have at least 100 completed Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) surveys. Designed and validated by CMS, the HCAHPS survey provides hospitals with a standardized instrument for gathering information about patient satisfaction with, and perspectives on, their hospital care.[12] HCAHPS responses will be used to assess 8 patient experience of care measures (Table 3).
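To make these reporting minimums concrete, the short sketch below encodes the FY 2013 thresholds described above. It is an illustration only; the function name and inputs are our own assumptions, not CMS code, and the actual determination is made from data reported through the Hospital IQR Program.

```python
# Illustrative check of the FY 2013 VBP reporting minimums described above.
# The function and its inputs are hypothetical; CMS applies these rules to
# data reported through the Hospital Inpatient Quality Reporting Program.

def meets_fy2013_vbp_minimums(cases_per_process_measure, completed_hcahps_surveys):
    """cases_per_process_measure: case counts, one per clinical process measure (up to 12)."""
    measures_with_enough_cases = sum(1 for n in cases_per_process_measure if n >= 10)
    enough_process_data = measures_with_enough_cases >= 4
    enough_surveys = completed_hcahps_surveys >= 100
    # A hospital may qualify on process measures, on surveys, or on both.
    return enough_process_data or enough_surveys

# Example: 5 measures with at least 10 cases each, but only 60 completed surveys.
print(meets_fy2013_vbp_minimums([12, 30, 0, 15, 9, 22, 48], 60))  # True
```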

Table 1. Inclusion and Exclusion Criteria for the Inpatient Value‐Based Purchasing Program in Fiscal Year 2013

Inclusion criteria
  Acute care hospital
  Located in any of the 50 US states or the District of Columbia (excluding Maryland)
  Reports at least 10 cases in at least 4 of the 12 clinical process of care measures and/or at least 100 completed HCAHPS surveys

Exclusion criteria
  Psychiatric, rehabilitation, long‐term care, children's, or cancer hospital
  Does not participate in the Hospital Inpatient Quality Reporting Program during the VBP performance period
  Cited by the Secretary of HHS for significant patient safety violations during the performance period
  Does not meet the minimum reporting requirements for number of cases, process measures, and surveys needed to participate in VBP

NOTE: Abbreviations: HCAHPS, Hospital Consumer Assessment of Healthcare Providers and Systems; HHS, US Department of Health and Human Services; VBP, Value‐Based Purchasing.
Table 2. Clinical Process of Care Measures Evaluated by Value‐Based Purchasing in Fiscal Year 2013

Acute myocardial infarction
  Fibrinolytic therapy received within 30 minutes of hospital arrival
  Primary percutaneous coronary intervention received within 90 minutes of hospital arrival
Heart failure
  Discharge instructions provided
Pneumonia
  Blood cultures performed in the emergency department prior to initial antibiotic received in hospital
  Initial antibiotic selection for community‐acquired pneumonia in immunocompetent patients
Healthcare‐associated infections
  Prophylactic antibiotic received within 1 hour prior to surgical incision
  Prophylactic antibiotic selection for surgical patients
  Prophylactic antibiotics discontinued within 24 hours after surgery ends
  Cardiac surgery patients with controlled 6:00 AM postoperative serum glucose
Surgeries
  Surgery patients on a β‐blocker prior to arrival who received a β‐blocker during the perioperative period
  Surgery patients with recommended venous thromboembolism prophylaxis ordered
  Surgery patients who received appropriate venous thromboembolism prophylaxis from 24 hours prior to surgery to 24 hours after surgery

NOTE: Mortality measures to be added in fiscal year 2014: acute myocardial infarction, congestive heart failure, pneumonia.
Table 3. Patient Experience of Care Measures Evaluated by Value‐Based Purchasing in Fiscal Year 2013
  Communication with nurses
  Communication with doctors
  Responsiveness of hospital staff
  Pain management
  Communication about medicines
  Cleanliness and quietness of hospital environment
  Discharge information
  Overall rating of hospital

Participation in the program is mandatory for eligible hospitals, and CMS estimates that more than 3000 facilities across the United States will participate in FY 2013. Roughly $850 million in VBP incentives will be paid out to these participating hospitals in FY 2013. The program is being financed through a 1% across‐the‐board reduction in FY 2013 diagnosis‐related group (DRG)‐based inpatient payments to participating hospitals. On December 20, 2012, CMS publicly announced FY 2013 VBP incentives for all participating hospitals. Each hospital's incentive is retroactive and based on its performance between July 1, 2011 and March 31, 2012.

All data used for calculating VBP incentives are reported to CMS through its Hospital Inpatient Quality Reporting (Hospital IQR) Program, a national program instituted in 2003 that rewards hospitals for reporting designated quality measures. As of 2007, approximately 95% of eligible US hospitals were participating in the Hospital IQR program.[1] Measures evaluated via chart abstraction and surveys reflect a hospital's performance for its entire patient population, whereas measures assessed with claims data reflect hospital performance only for Medicare patients.

Evaluation of Hospitals

In FY 2013, hospital VBP incentive payments will be based entirely on performance in 2 domains: (1) clinical processes of care (weighted 70%) and (2) patient experience of care (weighted 30%). For each domain, CMS will evaluate each hospital's improvement over time as well as its achievement compared to other hospitals in the VBP program. By assessing and rewarding both achievement and improvement, CMS ensures that lower‐performing hospitals can still be rewarded for making substantial improvements in quality. To evaluate the first metric, improvement over time, CMS will compare a hospital's performance during a given reporting period with its baseline performance 2 years prior to this block of time. A hospital receives improvement points for improving its performance over time. To assess the second metric, achievement compared to other hospitals in the VBP program, CMS will compare each hospital's performance during a reporting period with the baseline performance (ie, performance 2 years prior to the reporting period) of all other hospitals in the VBP program. A hospital is awarded achievement points if its performance exceeds the 50th percentile of all hospitals during the baseline performance period. Improvement scores range from 0 to 9, whereas achievement scores range from 0 to 10. The greater of a hospital's improvement and achievement scores on each VBP measure is used to calculate the hospital's total earned clinical care domain score and total earned HCAHPS base score. Hospitals that lack the baseline performance data required to assess improvement will be evaluated solely on the basis of achievement points.[1] The total earned clinical care domain score is multiplied by 70% to reach the clinical care domain's contribution to a hospital's total performance score.
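The sketch below illustrates the achievement‐versus‐improvement logic for a single measure using simple linear interpolation between a hospital's own baseline, the 50th‐percentile achievement threshold, and a benchmark rate. This is a simplification for illustration only; CMS' actual point formulas are more detailed, and the threshold, benchmark, and rates shown are assumed values.

```python
# Simplified illustration of scoring one VBP measure. A hospital earns the
# greater of its achievement points (0-10, judged against all hospitals'
# baseline performance) and its improvement points (0-9, judged against its
# own baseline). The linear interpolation below is illustrative, not CMS' formula.

def interpolate(value, low, high, max_points):
    if high <= low:
        return 0.0
    fraction = (value - low) / (high - low)
    return max(0.0, min(max_points, fraction * max_points))

def measure_score(rate, own_baseline, achievement_threshold, benchmark):
    achievement = interpolate(rate, achievement_threshold, benchmark, 10)  # 0-10 points
    improvement = interpolate(rate, own_baseline, benchmark, 9)            # 0-9 points
    return max(achievement, improvement)

# Hypothetical hospital: 82% adherence now, 70% at its own baseline,
# 50th percentile of all hospitals' baselines = 80%, benchmark = 95%.
print(round(measure_score(0.82, 0.70, 0.80, 0.95), 2))  # improvement points dominate here
```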

Each hospital's total patient experience domain, or HCAHPS performance, score consists of 2 components: a total earned HCAHPS base score as described above and a consistency score. The consistency score evaluates the consistency of a hospital's performance across all 8 patient experience of care measures (Table 3). If a hospital is above the 50th percentile of all hospital scores during the baseline period on all 8 measures, then it receives 100% of its consistency points. If a hospital is at the 0th percentile for a given measure, then it receives 0 consistency points for all measures. This provision promotes consistency by harshly penalizing hospitals with extremely poor performance on any 1 specific measure. If 1 or more measures fall between the 0th and 50th percentiles, then the hospital receives a consistency score that takes into account how many measures were below the 50th percentile and their distance from this threshold. Each hospital's total HCAHPS performance score (the sum of total earned HCAHPS base points and consistency points) is then multiplied by 30% to arrive at the patient experience of care domain's contribution to a hospital's total performance score.

Importantly, CMS excluded from its VBP initiative 10 clinical process measures reported in the Hospital IQR Program because they are topped out; that is, almost all hospitals already perform them at very high rates (Table 4). Examples of these topped out process measures include administration of aspirin to all patients with AMI on arrival at the hospital; counseling of patients with AMI, CHF, and pneumonia about smoking cessation; and prescribing angiotensin‐converting enzyme inhibitors or angiotensin receptor blockers to patients with CHF and left ventricular dysfunction.[1]

Table 4. Topped‐Out Measures

Acute myocardial infarction
  Aspirin administered on arrival to the emergency department
  ACEI or ARB prescribed on discharge
  Patient counseled about smoking cessation
  β‐Blocker prescribed on discharge
  Aspirin prescribed at discharge
Heart failure
  Patient counseled about smoking cessation
  Evaluation of left ventricular systolic function
  ACEI or ARB prescribed for left ventricular systolic dysfunction
Pneumonia
  Patient counseled about smoking cessation
Surgical Care Improvement Project
  Surgery patients with appropriate hair removal

NOTE: Abbreviations: ACEI, angiotensin‐converting enzyme inhibitor; ARB, angiotensin receptor blocker.

Calculation of VBP Incentives and Public Reporting

A hospital's total performance score for FY 2013 is equal to the sum of 70% of its clinical care domain score and 30% of its total HCAHPS performance score. This total performance score is entered into a linear mathematical formula to calculate each hospital's incentive payment. CMS projects that VBP will lead to a net increase in Medicare payments for one‐half of hospitals and a net decrease in payments for the other half of participating facilities.[1]
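Continuing the illustration, the sketch below combines the two FY 2013 domains with their 70%/30% weights and maps the total performance score through a hypothetical linear exchange function. The slope shown is an arbitrary assumption; in practice CMS calibrates it so that total incentive payments redistribute the 1% of DRG payments withheld from participating hospitals.

```python
# Illustrative FY 2013 total performance score and payment adjustment.
# Domain scores are assumed to be normalized to the 0-1 range, and the slope
# below is a made-up value, not the exchange factor CMS actually uses.

def total_performance_score(clinical_domain_score, hcahps_domain_score):
    return 0.70 * clinical_domain_score + 0.30 * hcahps_domain_score

def payment_adjustment(total_score, slope=0.02, withhold=0.01):
    # Hospitals forfeit the withhold (1% of DRG payments) and earn incentives
    # back in proportion to their total performance score.
    return slope * total_score - withhold

tps = total_performance_score(0.65, 0.80)   # 0.695
print(round(payment_adjustment(tps), 4))    # net change of about +0.39% in this example
```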

In December 2012, CMS publicly disclosed information about the initial performance of each hospital in the VBP program. Reported information included: (1) hospital performance for each applicable performance measure, (2) hospital performance by disease condition or procedure, and (3) each hospital's total performance score. Initial analyses of these performance data revealed that 1557 hospitals will receive bonus payments under VBP in FY 2013, whereas 1427 hospitals will lose money under this program. Treasure Valley Hospital, a 10‐bed physician‐owned hospital in Boise, Idaho, will receive a 0.83% increase in Medicare payments, the largest payment increase under VBP in 2013. Conversely, Auburn Community Hospital in upstate New York will suffer the most severe payment reduction: 0.9% per Medicare admission. The penalty will cost Auburn Hospital about $100,000, which is slightly more than 0.1% of its yearly $85 million operating budget.[13] For almost two‐thirds of participating hospitals, FY 2013 Medicare payments will change by <0.25%.[13] Additional information about VBP payments for FY 2013, including the number of hospitals that received VBP incentives and the size and range of these payments, is now accessible to the public through CMS' Hospital Compare Web site (http://www.hospitalcompare.hhs.gov).

CHALLENGES OF VBP

As the Medicare VBP program evolves, and hospitals confront ever‐larger financial incentives to deliver high‐value as opposed to high‐volume care, it will be important to recognize limitations of the VBP program as they arise. Here we briefly discuss several conceptual and implementation challenges that physicians and policymakers should consider when assessing the merits of VBP in promoting high‐quality healthcare.

Rigorous and Continuous Evaluation of VBP Programs

The main premise of using VBP to incentivize hospitals to deliver high‐quality, cost‐effective care is that the process measures used to determine hospital quality in fact improve patient outcomes. However, it is already well established that improvements in measures of process quality are not always associated with improvements in patient outcomes.[14, 15, 16] Moreover, incentivizing specific process measures encourages hospitals to shift resources away from other aspects of care delivery, which may have ambiguous, or even deleterious, effects on patient outcomes. Although incentives ideally push hospitals to shift resources away from low‐quality care toward high‐quality care, in practice this is not always the case. Hospital resources may instead be drawn away from areas that are not yet incented by VBP, but for which improvements in quality of care are desperately needed. The same empirical focus behind using VBP to incentivize hospitals to improve patient outcomes efficiently should be used to evaluate whether VBP is continually meeting its stated goals: reducing overall patient morbidity and mortality and improving patient satisfaction at ideally lower cost. The experience of the US education system with public policies designed to improve student testing performance may serve as a cautionary example here. Such policies, which provide financial rewards to schools whose students perform well on standardized tests, can indeed raise testing performance. However, these policies also lead educators to teach to the test, and to neglect important topics that are not tested on standardized exams.[17]

Prioritization of Process Measures

As payment incentives for VBP currently stand, process measures are weighted equally regardless of the clinical benefits they generate and the resources required to achieve improvements in process quality. For instance, 2 process measures, continuing home β‐blocker medications for patients with coronary artery disease undergoing surgery and early percutaneous coronary intervention for patients with AMI, may be weighted equally as process measures although their clinical benefits and their costs of implementation are very different. Some hospitals responding to VBP incentives may choose to invest in areas where their ability to earn VBP incentive payments is high and the costs of improvement are low, even though those may not be the areas where intervention is most needed, that is, where clinical outcomes could be improved the most. Recognizing that process measures have heterogeneous benefits and costs of implementation is important when prioritizing their reimbursement in VBP.

Measuring Improvements in Hospital Quality

Tying hospital financial compensation to hospital quality implies that measures of hospital quality should be robust. To incentivize hospitals to improve quality not only relative to other hospitals but also relative to their own past performance, the VBP program has established a baseline performance for each hospital. Each hospital is compared to its baseline performance in subsequent evaluation periods. Thus, properly measuring a hospital's baseline performance is important. During a given baseline period, some hospitals may have better or worse outcomes than their steady state due to random variation alone. Some hospitals deemed to have a low baseline will experience improvements in quality that reflect chance alone rather than active efforts to improve quality. Similarly, some hospitals deemed to have a high baseline will experience reductions in quality through chance. Of course, neither of these changes should be subject to differences in reimbursement, because they do not reflect actual organizational changes made by the hospitals. The VBP program has made significant efforts to address this issue by requiring participating hospitals to have a large enough sample of cases such that estimated rates of process quality adherence meet a reliability threshold (ie, are likely to be consistent over time rather than vary substantially through chance alone). However, not all process measures exhibit high reliability, particularly those for which adverse events are rare (eg, foreign objects retained after surgery, air embolisms, and blood incompatibility). Ultimately, CMS's decision to balance the need for statistically reliable data against the goal of including as many hospitals as possible in the VBP program will require ongoing reevaluation.
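One common way to quantify the reliability referenced above, used in provider profiling generally and not necessarily the method CMS applies, is a signal‐to‐noise ratio: the share of variation in a hospital's measured rate that reflects true between‐hospital differences rather than sampling noise. A minimal sketch, with assumed variance values:

```python
# Signal-to-noise reliability of a hospital's measured adherence rate.
# between_hospital_variance is the variance of true rates across hospitals;
# the sampling (within-hospital) variance of a proportion is p*(1-p)/n.
# All numbers below are illustrative assumptions.

def reliability(p, n, between_hospital_variance):
    within_hospital_variance = p * (1 - p) / n
    return between_hospital_variance / (between_hospital_variance + within_hospital_variance)

# A measure with few eligible cases is far less reliable than one with many.
print(round(reliability(0.90, 25, 0.004), 2))   # small sample: roughly 0.53
print(round(reliability(0.90, 400, 0.004), 2))  # large sample: roughly 0.95
```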

Choosing Hospital Comparators Appropriately

In the current VBP program, hospitals will be evaluated in part by how they compare to hospitals nationally. However, studies of regional variation in healthcare have demonstrated large variations in practice patterns across the United States,[18, 19, 20] raising the question of whether hospitals should, at least initially, be compared to hospitals in the same geographic area. Although the ultimate goal of VBP should be to hold hospitals to a national standard, local practice patterns are not easily modified within 1‐ to 2‐year timeframes. Initially comparing hospitals to a national rather than local standard may unfairly penalize hospitals that are relative underperformers nationally but overperformers regionally. Although CMS's policy to reward improvement within hospitals over time mitigates issues arising from a cross‐sectional comparison of hospitals, the issue still remains if many hospitals within a region not only underperform relative to other hospitals nationally but also fail to demonstrate improvement. More broadly, this issue extends to differences across hospitals in factors that impact their ability to meet VBP goals. These factors may include, for example, hospital size, profitability, patient case and insurance mix, and presence of an electronic medical record. Comparing hospitals with vastly different abilities to achieve VBP goals and improve quickly may amount to inequitable policy.

Continual Evaluation of Topped‐Out Measures

Process measures that are met at high rates at nearly all hospitals are not used in evaluations by CMS for VBP. An assumption underlying CMS' decision to not reward hospitals for achieving these topped‐out measures is that once physicians and hospitals make cognitive and system‐level improvements that improve process quality, these gains will persist after the incentive is removed. Thus, CMS hopes and anticipates that although performance incentives will make it easier for well‐meaning physicians to learn to do the right thing, doctors will continue to do the right things for patients after these incentives are removed.[21, 22] Although this assumption may generally be accurate, it is important to continue to evaluate whether measures that are currently topped out remain adequately performed, because rewarding new quality measures will necessarily lead hospitals to reallocate resources away from other clinical activities. Although we hope that the continued public reporting of topped‐out measures will prevent declines in performance on these measures, policy makers and clinicians should be aware that the lack of financial incentives for topped‐out measures may result in declines in quality. To this point, an analysis of 35 Kaiser Permanente facilities from 1997 to 2007 demonstrated that the removal of financial incentives for diabetic retinopathy and cervical cancer screening was associated with subsequent declines in performance of 3% and 1.6% per year, respectively.[23]

Will VBP Incentives Be Large Enough to Change Practice Patterns?

The VBP Program's ability to influence change depends, at least in part, on how the incentives offered under this program compare to the magnitude of the investments that hospitals must make to achieve a given reward. In general, larger incentives are necessary to motivate more significant changes in behavior or to influence organizations to invest the resources needed to achieve change. The incentives offered under VBP in FY 2013 are quite modest. Almost two‐thirds of participating hospitals will see their FY 2013 Medicare revenues change by <0.25%, roughly $125,000 at most.[13, 24] Although these incentives may motivate hospitals that can improve performance and achievement with very modest investments, they may have little impact on organizations that need to make significant upfront investments in care processes to achieve sustainable improvements in care quality. As CMS increases the size of VBP incentives over the next 2 to 4 years, it will also hold hospitals accountable for a broader and increasingly complex set of outcomes. Improving these outcomes may require investments in areas such as information technology and process improvement that far surpass the VBP incentive reward.

Moreover, prior research suggests that financial incentives like those available under VBP may contribute only slightly to performance improvements when public reporting already exists. For example, in a 2‐year study of 613 US hospitals implementing pay‐for‐performance plus public reporting or public reporting only, pay for performance plus public reporting was associated with only a 2.6% to 4.1% increase in a composite measure of quality when compared to hospitals with public reporting only.[9] Similarly, a study of 54 hospitals participating in the CMS pay for performance pilot initiative found no significant improvement in quality of care or outcomes for AMI when compared to 446 control hospitals.[25] A long‐term analysis of pay for performance in the Medicare Premier Hospital Quality Incentive Demonstration found that participation in the program had no discernible effect on 30‐day mortality rates.[10] Finally, a study of physician medical groups contracting with a large network healthcare maintenance organization found that the implementation of pay for performance did not result in major before and after improvements in clinical quality compared to a control group of medical groups.[26]

High‐Value Care Is Not Always Low‐Cost Care

Not surprisingly, the clinical process measures included in CMS' hospital VBP program evaluate a select and relatively small group of high‐value and low‐cost interventions (eg, appropriate administration of antibiotics and tight control of serum glucose in surgical patients). However, an important body of work has demonstrated that high‐cost care (eg, intensive inpatient hospital care for common acute medical conditions) may also be highly valuable in terms of improving survival.[20, 27, 28, 29, 30] As the hospital VBP program evolves, its overseers will need to consider whether to include additional incentives for high‐value high‐cost healthcare services. Such considerations will likely become increasingly salient as healthcare delivery organizations move toward capitated delivery models. In particular, the VBP program's Medicare Spending Per Beneficiary measure, which quantifies inpatient and subsequent outpatient spending per beneficiary after a given hospitalization episode, will need to distinguish between higher‐spending hospitals that provide highly effective care (eg, care that reduces mortality and readmissions) and facilities that provide less‐effective care.

FUTURE OF VBP

Although the future of VBP is unknown, CMS is likely to modify the program in a number of ways over the next 3 to 5 years. First, CMS will likely expand the breadth and focus of incentivized measures in the VBP program. In FY 2014, for example, CMS is adding a set of three 30‐day mortality outcome measures to VBP: 30‐day risk‐adjusted mortality for AMI, CHF, and pneumonia.[1] A hospital's performance with respect to these outcomes will represent 25% of its total performance score in 2014, whereas the clinical process of care and patient experience of care domains will account for 45% and 30% of this score, respectively. In 2015, patient experience and outcome measures will each account for 30% of a hospital's performance score, whereas process and efficiency measures will each account for 20%. The composition of this performance score evidences a shift away from rewarding process‐based measures and toward incentivizing measures of clinical outcomes and patient satisfaction, the latter of which may be highly subjective and more representative of a hospital's catchment population than of the hospital's care itself.[31] Additional measures in the domains of patient safety, care coordination, population and community health, emergency room wait times, and cost control may also be added to the VBP program in FY 2015 to FY 2017. Furthermore, CMS will continue to reevaluate the appropriateness of measures that are already included in VBP and will stop incentivizing measures that have become topped out or are no longer supported by the National Quality Forum.[1, 13]

Second, CMS has established a gradual annual increase of 0.25 percentage points in the share of each hospital's inpatient DRG‐based payments that is at stake under VBP. In FY 2014, for example, participating hospitals will be required to contribute 1.25% of inpatient DRG payments to the VBP program. This percentage is likely to increase to 2% or more by 2017.[1, 32]
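To make the scale of these stakes concrete, the sketch below applies this withhold schedule to a hypothetical hospital with $100 million in annual Medicare inpatient DRG payments. The FY 2015 to FY 2017 percentages simply extend the 0.25‐percentage‐point annual increase described above and are a projection, not a certainty.

```python
# Dollars at risk under VBP for a hypothetical hospital. FY 2013-2014 withhold
# percentages are those cited above; later years assume the 0.25-point-per-year
# increase continues, which is a projection rather than a guarantee.

withhold_by_year = {2013: 0.0100, 2014: 0.0125, 2015: 0.0150, 2016: 0.0175, 2017: 0.0200}
drg_revenue = 100_000_000  # assumed annual Medicare inpatient DRG payments

for year, withhold in sorted(withhold_by_year.items()):
    print(f"FY {year}: ${drg_revenue * withhold:,.0f} at risk ({withhold:.2%} of DRG payments)")
```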

Third, expansions of the VBP program complement a number of other quality improvement efforts overseen by CMS, including the Hospital Readmissions Reduction Program. Effective for discharges beginning on October 1, 2012, hospitals with excess readmissions for AMI, CHF, and pneumonia are at risk for reimbursement reductions for all Medicare admissions in proportion to the rate of excess rehospitalizations. Some of the same concerns about the hospital VBP program outlined above have also been raised for this program, namely, whether readmission penalties will be large enough to impact hospital behavior, whether readmissions are even preventable,[33, 34] and whether adjustments in hospital‐level policies will reduce admissions that are known to be heavily influenced by patient economic and social factors that are outside of a hospital's control.[35, 36] Despite the limitations of VBP and the challenges that lie ahead, there is optimism that rewarding hospitals that provide high‐value rather than high‐volume care will not only improve outcomes of hospitalized patients in the United States, but will potentially do so at a lower cost. Encouraging hospitals to improve their quality of care may also have important spillover effects on other healthcare domains. For example, hospitals that adopt systems to ensure prompt delivery of antibiotics to patients with pneumonia may also observe positive spillover effects in the prompt antibiotic management of other acute infectious illnesses that are not covered by VBP. VBP may also have spillover effects on medical malpractice: financial incentives to practice higher‐quality, evidence‐based care may reduce both malpractice liability and defensive medicine.

The government's ultimate goal in implementing VBP is to identify a broad and clinically relevant set of outcome measures that can be used to incentivize hospitals to deliver high‐quality as opposed to high‐volume healthcare. The first wave of outcome measures has already been instituted. It remains to be seen whether the incentive rewards of Medicare's hospital VBP program will be large enough that hospitals feel compelled to improve and compete for them.

References
  1. Centers for Medicare and Medicaid Services. Hospital Value‐Based Purchasing Web site. 2013. Available at: http://www.cms.gov/Medicare/Quality‐Initiatives‐Patient‐Assessment‐Instruments/hospital‐value‐based‐purchasing/index.html. Accessed March 4, 2013.
  2. VanLare JM, Conway PH. Value‐based purchasing—national programs to move from volume to value. N Engl J Med. 2012;367:292–295.
  3. Joynt KE, Rosenthal MB. Hospital value‐based purchasing: will Medicare's new policy exacerbate disparities? Circ Cardiovasc Qual Outcomes. 2012;5:148–149.
  4. Centers for Medicare and Medicaid Services. CMS/Premier Hospital Quality Incentive Demonstration (HQID). 2013. Available at: https://www.premierinc.com/quality‐safety/tools‐services/p4p/hqi/faqs.jsp. Accessed March 5, 2013.
  5. Centers for Medicare and Medicaid Services. Hospital Compare Web site. 2013. Available at: http://www.medicare.gov/hospitalcompare. Accessed March 4, 2013.
  6. Brown J, Doloresco F, Mylotte JM. “Never events”: not every hospital‐acquired infection is preventable. Clin Infect Dis. 2009;49:743–746.
  7. Epstein AM. Will pay for performance improve quality of care? The answer is in the details. N Engl J Med. 2012;367:1852–1853.
  8. Sutton M, Nikolova S, Boaden R, Lester H, McDonald R, Roland M. Reduced mortality with hospital pay for performance in England. N Engl J Med. 2012;367:1821–1828.
  9. Lindenauer PK, Remus D, Roman S, et al. Public reporting and pay for performance in hospital quality improvement. N Engl J Med. 2007;356:486–496.
  10. Jha AK, Joynt KE, Orav EJ, Epstein AM. The long‐term effect of Premier pay for performance on patient outcomes. N Engl J Med. 2012;366:1606–1615.
  11. Houle SK, McAlister FA, Jackevicius CA, Chuck AW, Tsuyuki RT. Does performance‐based remuneration for individual health care practitioners affect patient care?: a systematic review. Ann Intern Med. 2012;157:889–899.
  12. Centers for Medicare and Medicaid Services. Hospital Consumer Assessment of Healthcare Providers and Systems Web site. 2013. Available at: http://www.hcahpsonline.org. Accessed March 5, 2013.
  13. Rau J. Medicare discloses hospitals' bonuses, penalties based on quality. Kaiser Health News. December 20, 2012. Available at: http://www.kaiserhealthnews.org/stories/2012/december/21/medicare‐hospitals‐value‐based‐purchasing.aspx?referrer=search. Accessed March 26, 2013.
  14. Yasaitis L, Fisher ES, Skinner JS, Chandra A. Hospital quality and intensity of spending: is there an association? Health Aff (Millwood). 2009;28:w566–w572.
  15. Fonarow GC, Abraham WT, Albert NM, et al. Association between performance measures and clinical outcomes for patients hospitalized with heart failure. JAMA. 2007;297:61–70.
  16. Rubin HR, Pronovost P, Diette GB. The advantages and disadvantages of process‐based measures of health care quality. Int J Qual Health Care. 2001;13:469–474.
  17. Jacob BA. Accountability, incentives and behavior: the impact of high‐stakes testing in the Chicago public schools. J Public Econ. 2005;89:761–796.
  18. Fisher ES, Wennberg DE, Stukel TA, Gottlieb DJ, Lucas FL, Pinder EL. The implications of regional variations in Medicare spending. Part 1: the content, quality, and accessibility of care. Ann Intern Med. 2003;138:273–287.
  19. Fisher ES. Medical care—is more always better? N Engl J Med. 2003;349:1665–1667.
  20. Romley JA, Jena AB, Goldman DP. Hospital spending and inpatient mortality: evidence from California: an observational study. Ann Intern Med. 2011;154:160–167.
  21. James BC. Making it easy to do it right. N Engl J Med. 2001;345:991–993.
  22. Christensen RD, Henry E, Ilstrup S, Baer VL. A high rate of compliance with neonatal intensive care unit transfusion guidelines persists even after a program to improve transfusion guideline compliance ended. Transfusion. 2011;51:2519–2520.
  23. Lester H, Schmittdiel J, Selby J, et al. The impact of removing financial incentives from clinical quality indicators: longitudinal analysis of four Kaiser Permanente indicators. BMJ. 2010;340:c1898.
  24. Werner RM, Dudley RA. Medicare's new hospital value‐based purchasing program is likely to have only a small impact on hospital payments. Health Aff (Millwood). 2012;31:1932–1940.
  25. Glickman SW, Ou FS, DeLong ER, et al. Pay for performance, quality of care, and outcomes in acute myocardial infarction. JAMA. 2007;297:2373–2380.
  26. Mullen KJ, Frank RG, Rosenthal MB. Can you get what you pay for? Pay‐for‐performance and the quality of healthcare providers. Rand J Econ. 2010;41:64–91.
  27. Romley JA, Jena AB, O'Leary JF, Goldman DP. Spending and mortality in US acute care hospitals. Am J Manag Care. 2013;19:e46–e54.
  28. Barnato AE, Farrell MH, Chang CC, Lave JR, Roberts MS, Angus DC. Development and validation of hospital “end‐of‐life” treatment intensity measures. Med Care. 2009;47:1098–1105.
  29. Ong MK, Mangione CM, Romano PS, et al. Looking forward, looking back: assessing variations in hospital resource use and outcomes for elderly patients with heart failure. Circ Cardiovasc Qual Outcomes. 2009;2:548–557.
  30. Stukel TA, Fisher ES, Alter DA, et al. Association of hospital spending intensity with mortality and readmission rates in Ontario hospitals. JAMA. 2012;307:1037–1045.
  31. Young GJ, Meterko M, Desai KR. Patient satisfaction with hospital care: effects of demographic and institutional characteristics. Med Care. 2000;38:325–334.
  32. VanLare JM, Blum JD, Conway PH. Linking performance with payment: implementing the Physician Value‐Based Payment Modifier. JAMA. 2012;308:2089–2090.
  33. van Walraven C, Bennett C, Jennings A, Austin PC, Forster AJ. Proportion of hospital readmissions deemed avoidable: a systematic review. CMAJ. 2011;183:E391–E402.
  34. van Walraven C, Jennings A, Taljaard M, et al. Incidence of potentially avoidable urgent readmissions and their relation to all‐cause urgent readmissions. CMAJ. 2011;183:E1067–E1072.
  35. Joynt KE, Jha AK. Thirty‐day readmissions—truth and consequences. N Engl J Med. 2012;366:1366–1369.
  36. Joynt KE, Orav EJ, Jha AK. Thirty‐day readmission rates for Medicare beneficiaries by race and site of care. JAMA. 2011;305:675–681.
Issue
Journal of Hospital Medicine - 8(5)
Page Number
271-277
Display Headline
Hospital value‐based purchasing
Article Source
Copyright © 2013 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Anupam B. Jena, MD, PhD, Department of Health Care Policy, Harvard Medical School, 180 Longwood Avenue, Boston, MA 02115; Telephone: 617‐432‐8322; Fax: 617‐432‐0173. E‐mail: [email protected]