Stricter Duty-Hour Regulations Tied to Diminished Patient Care
A new report has linked recent changes made to hospital residents’ duty-hour regulations with a reduction in some aspects of patient care.
The study compared the duty-hour model implemented in 2011 by the Accreditation Council for Graduate Medical Education (ACGME), which generally limits first-year residents to a maximum 16-hour shift and more senior residents to 24 hours, with the less restrictive guidelines adopted in 2003, under which 30-hour shifts were permitted for all residents.
Researchers at Johns Hopkins University in Baltimore measured residents’ sleep duration, hospital admission volumes, residents’ educational opportunities, the number of handoffs, and patient satisfaction surveys during shifts worked by internal-medicine house staff trainees under both models. The researchers used a three-month crossover design.
Residents slept longer, as expected, but the data showed more handoffs, fewer chances to attend teaching conferences, and reduced intern presence during daytime shifts when trainees followed the more recent work model. The study authors associated the model adopted in 2011 with deterioration in continuity of patient care and perceived quality of care. One of the four house staff teams perceived the quality of patient care to be so diminished that it terminated the project early.
However, one residency program director says much more research needs to be done to determine the efficacy of the new work-hour rules, particularly their effects on patient and resident satisfaction. “There are things that go along with duty-hours, such as access to information and really well-designed handoff systems, that I think would bring out the safety advantages of duty-hours,” says Ethan Fried, MD, MS, FACP, associate professor of clinical medicine, Columbia College of Physicians and Surgeons and vice chair for education, department of medicine, St. Luke’s-Roosevelt Hospital, both in New York, and a former president of the Association of Program Directors in Internal Medicine.
“One of the reasons you’re not seeing an inflection in safety is because you have duty-hours, but you haven’t got the other system that you need to make duty-hours work. What people have been focused on is pure safety, and that we haven’t been able to demonstrate actual improvement in morbidity, mortality or complications,” he adds. “It’s one of those cases where I don’t know if we’re necessarily asking the right questions.”
Visit our website for more information on duty-hours.
In the Literature: Hospital-Based Research You Need to Know
Clinical question: Is routine preoperative urine screening beneficial?
Background: The value of preoperative urine screening is unproven, except before urologic procedures. Furthermore, treatment of asymptomatic bacteriuria may lead to adverse events, including diarrhea, allergic reactions, and Clostridium difficile infection (CDI).
Study design: Retrospective chart review.
Setting: Patients who underwent cardiothoracic, orthopedic, and vascular surgeries at the Minneapolis Veterans Affairs Medical Center in 2010.
Synopsis: A total of 1,934 procedures were performed on 1,699 patients, most of them orthopedic procedures (1,291 in 1,115 patients). A urine culture was obtained before 25% of procedures, with significant variation by service (cardiothoracic, 85%; vascular, 48%; orthopedic, 4%). Bacteriuria was detected in 11% of urine cultures (54 of 489), but antimicrobial drugs were dispensed to just 16 patients.
To identify correlates of preoperative urine culture use, patients with and without urine cultures were compared. The rate of surgical-site infection was similar in both groups. Postoperative UTI was more frequent among patients with bacteriuria. Rates of diarrhea, allergy, and CDI did not differ. Paradoxically, patients treated for preoperative UTI were more likely to develop surgical-site infections (45% vs. 14%; P=0.03). Postoperative UTI was also more frequent among treated than untreated patients (18% vs. 7%).
Bottom line: This is the largest study to assess outcomes of routine preoperative urine cultures. The findings suggest that preoperative screening for, and treatment of, asymptomatic bacteriuria should be avoided in patients undergoing nonurologic surgical procedures.
Citation: Drekonja DM, Zarmbinski B, Johnson JR. Preoperative urine culture at a veterans affairs medical center. JAMA Intern Med. 2013;173(1):71-72.
Visit our website for more physician reviews of recent HM-relevant literature.
Caring for oneself to care for others: physicians and their self-care
It is well known that clinicians experience distress and grief in response to their patients’ suffering. Oncologists and palliative care specialists are no exception, since they commonly experience patient loss and are often affected by unprocessed grief. These emotions can compromise clinicians’ personal well-being: unexamined emotions may lead to burnout, moral distress, compassion fatigue, and poor clinical decisions that adversely affect patient care. One approach to mitigating this harm is self-care, defined as a cadre of activities performed independently by an individual to promote and maintain personal well-being throughout life.
This article emphasizes the importance of having a self-care and self-awareness plan when caring for patients with life-limiting cancer and discusses validated methods to increase self-care, enhance self-awareness, and improve patient care.
Pulsed-dye laser erased evidence of breast radiation
BOSTON – The appearance of radiation-induced telangiectasias of the breast can be significantly improved by treatment with a pulsed-dye laser, investigators reported at the annual meeting of the American Society for Laser Medicine and Surgery.
There were no adverse treatment-associated effects, and the treatment was safe to use in breast cancer patients and women with reconstructed breasts, said Dr. Anthony Rossi, a fellow in procedural dermatology/Mohs surgery at Memorial Sloan-Kettering Cancer Center in New York.
"After treatment, all patients reported improvement, including an improved sense of confidence and aesthetic appearance, and one patient commented that she was now able to change in front of her partner without embarrassment," said Dr. Rossi.
Chronic radiation dermatitis can occur within 1 or 2 years of treatment for breast cancer. In one study, 59% of women had telangiectasias within 5 years of undergoing electron-beam radiotherapy, and 72% had telangiectasias at the treatment site within 7 years (Br. J. Radiol. 2002;75:444-7).
The clinical characteristics include skin atrophy, hypo- or hyperpigmentation, and prominent lesions believed to be caused by dilation of reduced or poorly supported skin vasculature. Telangiectasias of the breast are typically confined to the site of the highest radiation dose and to areas that received radiation boosts, such as surgical scars.
For women who have undergone breast cancer therapy, telangiectasias "can serve as a reminder of their cancer, almost akin to a surgical scar, and can prompt fears of recurrence or even social anxiety," Dr. Rossi said.
He and his colleagues conducted a retrospective study of 11 patients treated with a pulsed-dye laser for radiation-induced telangiectasias, looking at radiation type and dose received; onset, color, thickness, and distribution of telangiectasias; laser fluence parameters; and complications. They also evaluated patient perceptions and quality of life, and had pre- and postlaser clinical photos assessed by two independent raters to judge percentage clearance of telangiectasias.
The women had received an average of 5,000 cGy (50 Gy) in 25 fractions, often with radiation boosts to the surgical scars. The telangiectasias developed a mean of 3.7 years after radiation exposure.
Five patients were treated with a 595-nm pulsed-dye laser, and two with a 585-nm laser. The endpoint for all treatments was transient purpura.
The mean clearance was 72.7% (range, 50%-90%) after a mean of 4.3 treatments (range, 2-9). The average laser fluence was 7.2 J/cm². The energy was applied with a 10-mm spot size in 3- to 6-ms pulses.
The investigators saw no adverse effects of therapy, including in women with reconstructed breasts, whether with implants or flaps.
Based on their findings, the investigators are embarking on a prospective study designed to evaluate the effect of radiation-induced telangiectasias on patient quality of life and changes in quality-of-life measures after laser therapy, using the Skindex-16 and BREAST-Q validated scales. They also plan to assess long-term effects on quality of life and recurrence, if any, of treated telangiectasias.
The study was internally funded. Dr. Rossi reported having no financial disclosures.
AT LASER 2013
Major finding: The mean clearance of radiation-induced telangiectasias with a pulsed-dye laser was 72.7% (range, 50%-90%), after a mean of 4.3 treatments (2-9).
Data source: Retrospective case series of 11 breast cancer patients.
Disclosures: The study was internally funded. Dr. Rossi reported having no financial disclosures.
A Multifaceted Case
Box 1
The approach of an expert clinician to a clinical conundrum is revealed through the presentation of an actual patient's case, in a format typical of morning report. As in patient care, sequential pieces of information are provided to the clinician, who is unfamiliar with the case. The focus is on the thought processes of both the clinical team caring for the patient and the discussant.
Box 2
This icon represents the patient's case. Each paragraph that follows represents the discussant's thoughts.
A 67‐year‐old male presented to an outside hospital with a 1‐day history of fevers up to 39.4°C, bilateral upper extremity weakness, and confusion. Forty‐eight hours prior to his presentation, he had undergone uncomplicated bilateral carpal tunnel release surgery for bilateral upper extremity paresthesias.
Bilateral carpal tunnel syndrome should prompt consideration of systemic diseases that infiltrate or impinge both canals (eg, rheumatoid arthritis, acromegaly, hypothyroidism, amyloidosis), although it is most frequently explained by a bilateral repetitive stress (eg, workplace typing). The development of upper extremity weakness suggests that an alternative condition such as cervical myelopathy, bilateral radiculopathy, or a rapidly progressive peripheral neuropathy may be responsible for his paresthesias. It would be unusual for a central nervous system process to selectively cause bilateral upper extremity weakness. Occasionally, patients emerge from surgery with limb weakness caused by peripheral nerve injury sustained from malpositioning of the extremity, but this would have been evident immediately following the operation.
Postoperative fevers are frequently unexplained, but require a search for common healthcare‐associated infections, such as pneumonia, urinary tract infection, intravenous catheter thrombophlebitis, wound infection, or Clostridium difficile colitis. However, such complications are unlikely following an ambulatory procedure. Confusion and fever together point to a central nervous system infection (meningoencephalitis or brain abscess) or a systemic infection that has impaired cognition. Malignancies can cause fever and altered mental status, but these are typically asynchronous events.
His past medical history was notable for hypertension, dyslipidemia, gout, actinic keratosis, and gastroesophageal reflux. His surgical history included bilateral knee replacements, repair of a left rotator cuff injury, and a herniorrhaphy. He was a nonsmoker who consumed 4 to 6 beers daily. His medications included clonidine, colchicine, atorvastatin, extended release metoprolol, triamterene‐hydrochlorothiazide, probenecid, and as‐needed ibuprofen and omeprazole.
Upon presentation he was cooperative and in no distress. Temperature was 38.9°C, pulse 119 beats per minute, blood pressure 140/90 mm Hg, and oxygen saturation 94% on room air. He was noted to have logical thinking but impaired concentration. His upper extremity movement was restricted because of postoperative discomfort and swelling rather than true weakness. The rest of the exam was normal.
Metabolic, infectious, structural (intracranial), and toxic disorders can cause altered mental status. His heavy alcohol use puts him at risk for alcohol withdrawal and infections (such as Listeria meningitis), both of which may explain his fever and altered mental status. Signs and symptoms of meningitis are absent at this time. His knee prostheses could have harbored an infection preoperatively and therefore warrant close examination. Patients sometimes have adverse reactions to medications they have been prescribed but are not exposed to until hospitalization, although his surgical procedure was likely done on an outpatient basis. Empiric thiamine should be administered early given his confusion and alcohol habits.
Basic laboratory studies revealed a hemoglobin of 11.2 g/dL, a white blood cell (WBC) count of 6,900/mm3 with 75% neutrophils, and a platelet count of 206,000/mm3. Mean corpuscular volume was 97 fL. Serum albumin was 2.4 g/dL, sodium 134 mmol/L, potassium 3.9 mmol/L, blood urea nitrogen 12 mg/dL, and creatinine 0.9 mg/dL. The aspartate aminotransferase was 93 U/L, alanine aminotransferase 73 U/L, alkaline phosphatase 254 U/L, and total bilirubin 1.0 mg/dL. Urinalysis was normal. Over the next 16 days, fevers and waxing and waning mentation continued. The following studies were normal or negative: blood and urine cultures; transthoracic echocardiogram; antinuclear antibodies, hepatitis B surface antigen, hepatitis C antibody, and human immunodeficiency virus antibody; magnetic resonance imaging of the brain, electroencephalogram, and lower extremity venous ultrasound.
Hypoalbuminemia may signal chronic illness, hypoproduction from liver disease (caused by his heavy alcohol use), or losses from the kidney or gastrointestinal tract. His anemia may reflect chronic disease or point toward a specific underlying disorder. For example, fever and anemia could arise from hemolytic processes such as thrombotic thrombocytopenic purpura or clostridial infections.
An extensive workup has not revealed a cause for his prolonged fever (eg, infection, malignancy, autoimmune condition, or toxin). Likewise, an explanation for confusion is lacking. Because systemic illness and structural brain disease have not been uncovered, a lumbar puncture is indicated.
A lumbar puncture under fluoroscopic guidance revealed a cerebrospinal fluid (CSF) WBC count of 6/mm3, red blood cell (RBC) count of 2255/mm3, protein 49 mg/dL, and glucose 54 mg/dL. The WBC differential was not reported. No growth was reported on bacterial cultures. Polymerase chain reaction assays for enterovirus and herpes simplex viruses 1 and 2 were negative. Cryptococcal antigen and Venereal Disease Research Laboratory serologies were also negative.
A CSF WBC count of 6/mm3 is slightly above the normal range but could be explained by a traumatic tap, given the elevated RBC count; the protein and glucose are likewise borderline. Collectively, these are nonspecific findings that could point to an infectious or noninfectious cause of intrathecal or paraspinous inflammation, but they are not suggestive of bacterial meningitis.
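The discussant's traumatic-tap reasoning can be made quantitative. A commonly cited bedside rule of thumb (not stated in the article) attributes roughly 1 WBC to every 500 to 1,000 RBCs introduced by a traumatic tap; the sketch below applies the midpoint of that range to this patient's values.

```python
def corrected_csf_wbc(csf_wbc, csf_rbc, rbc_per_wbc=750):
    """Subtract the CSF WBCs attributable to blood contamination from a
    traumatic tap. Rule of thumb: roughly 1 WBC accompanies every
    500-1,000 contaminating RBCs; the midpoint (750) is assumed here."""
    contaminating_wbc = csf_rbc / rbc_per_wbc
    return max(csf_wbc - contaminating_wbc, 0.0)

# This patient's values: CSF WBC 6/mm3 with RBC 2255/mm3
print(round(corrected_csf_wbc(6, 2255), 1))  # about 3.0 WBC/mm3
```

With the contamination accounted for, the corrected count falls within the conventionally normal range (5/mm3 or fewer), consistent with the discussant's conclusion that the findings are not suggestive of bacterial meningitis.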
The patient developed pneumonia, for which he received ertapenem. On hospital day 17 he was intubated for hypoxia and respiratory distress and was extubated after 4 days of mechanical ventilation. Increasing weakness in all extremities prompted magnetic resonance imaging of the spine, which revealed fluid and enhancement involving the soft tissues around C3-C4 and C5-C6, raising concern for discitis and osteomyelitis. Possible septic arthritis at the C3-C4 and C4-C5 facets was noted. Ring-enhancing fluid collections from T2 to T8, compatible with an epidural abscess, with cord compression at T4-T5 and T6-T7 were seen. Enhancement and fluid involving the facet joints from T2 to T7 were also consistent with septic arthritis (Figure 1).

His pneumonia appears to have developed many days into his hospitalization and therefore is unlikely to account for his initial fever and confusion. Blood cultures and echocardiography have not suggested an endovascular infection that could account for such widespread vertebral and epidural deposition. Many bacteria can cause epidural abscesses and septic arthritis, most commonly Staphylococcus aureus. Less common pathogens with a predilection for osteoarticular involvement, such as Brucella species, warrant consideration when there is appropriate epidemiologic risk.
Systemic bacterial infection remains a concern, as his alcoholism renders him partially immunosuppressed. However, a bacterial infection involving this many adjacent spinal joints is unusual, and a working diagnosis of multilevel spinal infection should therefore prompt consideration of noninfectious processes. When a patient develops a swollen peripheral joint and fever in the postoperative setting, gout or pseudogout is a leading consideration. The same thinking should be applied to the vertebrae, where spinal gout can manifest. Surgery itself, or associated changes in alcohol consumption or in medications (at least 4 of which are relevant to gout: colchicine, hydrochlorothiazide, probenecid, and ibuprofen), could predispose him to a flare.
Aspiration of the epidural collection yielded a negative Gram stain and culture. He developed swelling in the bilateral proximal interphalangeal joints and was treated with steroids and colchicine for suspected gout flare. Vancomycin and piperacillin‐tazobactam were initiated, and on hospital day 22 the patient was transferred to another hospital for further evaluation by neurosurgery.
The negative Gram stain and culture argue against septic arthritis, but these are imperfect tests and will not detect atypical pathogens (eg, spinal tuberculosis). Reexamination of the aspirate for urate and calcium pyrophosphate crystals would be useful. Initiating steroids in the setting of a potentially undiagnosed infection requires a careful risk/benefit analysis. It may be reasonable to treat the patient with colchicine alone, withholding steroids and avoiding nonsteroidal agents in case invasive procedures are planned.
On exam his temperature was 36°C, blood pressure 156/92 mm Hg, pulse 100 beats per minute, respirations 21 per minute, and oxygen saturation 97% on room air. He was not in acute distress and was oriented only to self. Bilateral 2+ lower extremity pitting edema up to the knees was noted. Examination of the heart and lungs was unremarkable. Gouty tophi were noted over both elbows. His joints were normal.
Cranial nerves II-XII were normal. Motor exam revealed normal muscle tone and bulk. Muscle strength was approximately 3/5 in the right upper extremity and 4+/5 in the left upper extremity. Bilateral lower extremity strength was 3/5 in hip flexion, knee flexion, and knee extension. Dorsiflexion and plantar flexion were approximately 2/5 bilaterally. Sensation was intact to light touch and pinprick, and proprioception was normal. Gait was not tested. A Foley catheter was in place.
This examination confirms ongoing encephalopathy and incomplete quadriplegia. The lower extremity weakness is nearly equal proximally and distally, which can be seen with an advanced peripheral neuropathy but is more characteristic of myelopathy. The expected concomitant sensory deficit of myelopathy is not present, although this may be difficult to detect in a confused patient. Reflex testing would help distinguish myelopathy (favored because of the imaging findings) from a rapidly progressive peripheral motor neuropathy (eg, acute inflammatory demyelinating polyneuropathy or acute intermittent porphyria).
The pitting edema likely represents fluid overload, which can be nonspecific after prolonged immobility during hospitalization; hypoalbuminemia is often speculated to play a role when this develops. His alcohol use puts him at risk for heart failure (although there is no evidence of this on exam) and liver disease (which his liver function tests suggest). The tophi speak to the extent and chronicity of his hyperuricemia.
On arrival he reported recent-onset diarrhea. Medications at transfer included metoprolol, omeprazole, prednisone, piperacillin/tazobactam, vancomycin, and colchicine; acetaminophen, bisacodyl, diphenhydramine, fentanyl, subcutaneous insulin, and labetalol were administered as needed. Laboratory studies included a hemoglobin of 9.5 g/dL, WBC count of 7,300/mm3 with 95% neutrophils, platelets 301,000/mm3, sodium 151 mmol/L, potassium 2.9 mmol/L, blood urea nitrogen 76 mg/dL, creatinine 2.0 mg/dL, aspartate aminotransferase 171 U/L, and alanine aminotransferase 127 U/L. Serum albumin was 1.7 g/dL.
At least 3 of his medications (diphenhydramine, fentanyl, and prednisone) may be contributing to his ongoing altered mental status, which may be further compounded by hypernatremia. Although his liver disease remains uncharacterized, hepatic encephalopathy may be contributing to his confusion as well.
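The hypernatremia noted here (sodium 151 mmol/L) is commonly quantified at the bedside as a free water deficit. The sketch below uses the standard formula with a hypothetical weight of 70 kg (no weight is reported in the case) and a total body water fraction of 0.5, as is conventional for an older man; both values are assumptions for illustration.

```python
def free_water_deficit_l(weight_kg, serum_na, tbw_fraction=0.5, target_na=140):
    """Standard estimate of the free water deficit (in liters) for
    hypernatremia: TBW x (serum Na / target Na - 1). A total body water
    fraction of 0.5 (older man) is assumed; 0.6 is typical for younger men."""
    total_body_water = tbw_fraction * weight_kg
    return total_body_water * (serum_na / target_na - 1)

# Hypothetical 70-kg patient with this serum sodium of 151 mmol/L
print(round(free_water_deficit_l(70, 151), 2))  # 2.75 L
```

A deficit on the order of a few liters would be consistent with ongoing diarrheal free water losses and would itself aggravate the encephalopathy.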
Colchicine is likely responsible for his diarrhea, which would be the most readily available explanation for his hypernatremia, hypokalemia, and acute kidney injury (AKI). Acute kidney injury could result from progressive liver disease (hepatorenal syndrome), decreased arterial perfusion (suggested by third spacing or his diarrhea), acute tubular necrosis (from infection or medication), or urinary retention secondary to catheter obstruction. Acute hyperuricemia can also cause AKI (urate nephropathy).
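The severity of the AKI described above can be anchored to the KDIGO creatinine criteria (the article does not invoke them explicitly). Below is a simplified staging sketch that considers serum creatinine only, ignoring the urine output criteria and the renal replacement qualifier for stage 3.

```python
def kdigo_stage_by_creatinine(baseline_cr, current_cr):
    """Simplified KDIGO AKI staging by serum creatinine (mg/dL) alone:
    stage 3 = >=3.0x baseline or >=4.0 mg/dL; stage 2 = 2.0-2.9x baseline;
    stage 1 = 1.5-1.9x baseline or an absolute rise >=0.3 mg/dL.
    Urine output and renal replacement therapy criteria are omitted."""
    ratio = current_cr / baseline_cr
    if ratio >= 3.0 or current_cr >= 4.0:
        return 3
    if ratio >= 2.0:
        return 2
    if ratio >= 1.5 or (current_cr - baseline_cr) >= 0.3:
        return 1
    return 0

# Creatinine 0.9 mg/dL on first admission, 2.0 mg/dL at transfer
print(kdigo_stage_by_creatinine(0.9, 2.0))  # stage 2
```

By this criterion the rise from 0.9 to 2.0 mg/dL (roughly 2.2 times baseline) corresponds to stage 2 AKI, underscoring that the kidney injury is substantial rather than incidental.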
Anemia has progressed and requires evaluation for blood loss as well as hemolysis. Hepatotoxicity from any of his medications (eg, acetaminophen) must be considered. Coagulation studies and review of the previous abdominal computed tomography would help determine the extent of his liver disease.
Neurosurgical consultation was obtained, and the patient and his family elected to proceed with a thoracic laminectomy. Cheesy fluid identified at the T6-T7 facet joints was found to contain rare deposits of monosodium urate crystals. Surgical specimen cultures were sterile. His mental status and strength slowly improved to baseline following the surgery. He was discharged on postoperative day 7 to a rehabilitation facility. At telephone follow-up he reported that he had regained his strength completely.
The fluid analysis and clinical course confirm spinal gout. The presenting encephalopathy remains unexplained; I am unaware of gout causing altered mental status.
COMMENTARY
Gout is an inflammatory condition triggered by the deposition of monosodium urate crystals in tissues in association with hyperuricemia.[1] Based on the 2007-2008 National Health and Nutrition Examination Survey, the prevalence of gout among US adults was 3.9% (8.3 million individuals).[2] These rates are increasing and are thought to be spurred by the aging population, rising rates of obesity, and changing dietary habits, including increased consumption of soft drinks and red meat.[3, 4, 5] The development of gout during hospitalization can prolong length of stay, and the implementation of a management protocol appears to help decrease treatment delays and the inappropriate discontinuation of gout prophylaxis.[6, 7] Surgery, with its associated physiologic stressors, can trigger gout, which in this setting is often polyarticular and presents with fever, leading to testing and consultations for the febrile episode.[8]
Gout is an ancient disease that is familiar to most clinicians. In 1666, Daniel Sennert, a German physician, described gout as "the physician's shame" because of its infrequent recognition.[9] Clinical gout spans 3 stages: asymptomatic hyperuricemia, acute and intercritical gout, and chronic gouty arthritis. The typical acute presentation is monoarticular, with the abrupt onset of pain, swelling, warmth, and erythema in a peripheral joint. It manifests most characteristically in the first metatarsophalangeal joint (podagra), but also frequently involves the midfoot, ankle, knee, and wrist and sometimes affects multiple joints simultaneously (polyarticular gout).[1, 10] The visualization of monosodium urate crystals either in synovial fluid or from a tophus is diagnostic of gout; however, guidelines recognize that a classic presentation of gout may be diagnosed based on clinical criteria alone.[11] Dual energy computed tomography and ultrasonography are emerging as techniques for the visualization of monosodium urate crystals; however, they are not currently routinely recommended.[12]
There are many unusual presentations of gout, with an increase in such reports paralleling both the overall increase in the prevalence of gout and improvements in available imaging techniques.[13] Atypical presentations present diagnostic challenges and are often caused by tophaceous deposits in unusual locations. Reports of atypical gout have described entrapment neuropathies (eg, gouty deposits inducing carpal tunnel syndrome), ocular gout manifested as conjunctival deposits and uveitis, pancreatic gout presenting as a mass, and dermatologic manifestations including panniculitis.[13, 14]
Spinal gout (also known as axial gout) manifests when crystal-induced inflammation, erosive arthritis, and tophaceous deposits occur along the spinal column. A cross-sectional study of patients with poorly controlled gout reported the prevalence of spinal gout diagnosed by computed tomography to be 35%; these radiographic findings were not consistently correlated with back pain.[15] Imaging features suggestive of spinal gout include intra-articular and juxta-articular erosions with sclerotic margins and density greater than the surrounding muscle. Periosteal new bone formation adjacent to bony destruction can form overhanging edges.[16] When retrospectively presented with the final diagnosis, the radiologist at our institution noted that the appearance was "typical gout in an atypical location."
Spinal gout can be confused with spinal metastasis, infection, and stenosis. It can remain asymptomatic or present with back pain, radiculopathy, or cord compression. The lumbar spine is the most frequently affected site.[17, 18] Many patients with spinal gout have had chronic tophaceous gout with radiologic evidence of erosions in the peripheral joints.[15] Patients with spinal gout also have elevated urate levels and markers of inflammation.[18] Surgical decompression and stabilization is recommended when there is frank cord compression, progressive neurologic compromise, or lack of improvement with gout therapy alone.[18]
This patient's male gender, history of gout, hypertension, alcohol consumption, and thiazide diuretic use placed him at increased risk of a gout attack.[19, 20] The possible interruption of urate-lowering therapy for the procedure, and the surgery itself, further heightened his risk of acute gouty arthritis in the perioperative period.[21] The patient's encephalopathy may have masked back pain and precluded an accurate neurologic exam. There is one case report to our knowledge describing encephalopathy that improved with colchicine and was possibly related to gout.[22] This patient's encephalopathy was deemed multifactorial and attributed to alcohol withdrawal, medications (including opioids and steroids), and infection (pneumonia).
Gout is best known for its peripheral arthritis and is rarely invoked in the consideration of spinal and myelopathic processes, where more pressing competing diagnoses, such as infection and malignancy, are typically considered. In addition, when surgical specimens are submitted for pathologic examination in formaldehyde (rather than alcohol), monosodium urate crystals dissolve and are thus difficult to identify in the specimen.
This case reminds us that gout remains a diagnostic challenge and should be considered in the differential diagnosis of an inflammatory process. Recognition of the multifaceted nature of gout can allow for the earlier recognition and treatment of the less typical presentations of this ancient malady.
KEY TEACHING POINTS
- Crystalline disease is a common cause of postoperative arthritis.
- Gout (and pseudogout) should be considered in cases of focal inflammation (detected by examination or imaging) when the evidence or predisposition for infection is limited or nonexistent.
- Spinal gout presents with back pain, radiculopathy, or cord compression and may be confused with spinal metastasis, infection, and stenosis.
Acknowledgements
The authors thank Dr. Kari Waddell and Elaine Bammerlin for their assistance in the preparation of this manuscript.
Disclosure: Nothing to report.
- Clinical features and treatment of gout. In: Firestein GS, Budd RC, Gabriel SE, McInnes IB, O'Dell JR, eds. Kelley's Textbook of Rheumatology. Vol 2. 9th ed. Philadelphia, PA: Elsevier/Saunders; 2013:1544–1575.
- Prevalence of gout and hyperuricemia in the US general population: the National Health and Nutrition Examination Survey 2007–2008. Arthritis Rheum. 2011;63(10):3136–3141.
- Increasing prevalence of gout and hyperuricemia over 10 years among older adults in a managed care population. J Rheumatol. 2004;31(8):1582–1587.
- Purine-rich foods, dairy and protein intake, and the risk of gout in men. N Engl J Med. 2004;350(11):1093–1103.
- Fructose-rich beverages and risk of gout in women. JAMA. 2010;304(20):2270–2278.
- Healthcare burden of in-hospital gout. Intern Med J. 2012;42(11):1261–1263.
- Improved management of acute gout during hospitalization following introduction of a protocol. Int J Rheum Dis. 2012;15(6):512–520.
- Postsurgical gout. Am Surg. 1995;61(1):56–59.
- Evolution of modern medicine. Arch Intern Med. 1960;105(4):640–644.
- Clinical practice. Gout. N Engl J Med. 2011;364(5):443–452.
- Management of gout: a 57-year-old man with a history of podagra, hyperuricemia, and mild renal insufficiency. JAMA. 2012;308(20):2133–2141.
- Diagnostic imaging of gout: comparison of high-resolution US versus conventional X-ray. Eur Radiol. 2008;18(3):621–630.
- The broad spectrum of urate crystal deposition: unusual presentations of gouty tophi. Semin Arthritis Rheum. 2012;42(2):146–154.
- Unusual clinical presentations of gout. Curr Opin Rheumatol. 2010;22(2):181–187.
- Correlates of axial gout: a cross-sectional study. J Rheumatol. 2012;39(7):1445–1449.
- Axial gouty arthropathy. Am J Med Sci. 2009;338(2):140–146.
- Axial (spinal) gout. Curr Rheumatol Rep. 2012;14(2):161–164.
- Spinal gout in a renal transplant patient: a case report and literature review. Surg Neurol. 2007;67(1):65–73.
- Alcohol consumption as a trigger of recurrent gout attacks. Am J Med. 2006;119(9):800.e11–800.e16.
- Recent diuretic use and the risk of recurrent gout attacks: the online case-crossover gout study. J Rheumatol. 2006;33(7):1341–1345.
- Clinical features and risk factors of postsurgical gout. Ann Rheum Dis. 2008;67(9):1271–1275.
- Gouty encephalopathy: myth or reality [in French]? Rev Med Interne. 1997;18(6):474–476.
Box 1
The approach to clinical conundrums by an expert clinician is revealed through the presentation of an actual patient's case, in an approach typical of a morning report. As in patient care, sequential pieces of information are provided to the clinician, who is unfamiliar with the case. The focus is on the thought processes of both the clinical team caring for the patient and the discussant.
Box 2
This icon represents the patient's case. Each paragraph that follows represents the discussant's thoughts.
A 67-year-old male presented to an outside hospital with a 1-day history of fevers up to 39.4°C, bilateral upper extremity weakness, and confusion. Forty-eight hours prior to his presentation he had undergone uncomplicated bilateral carpal tunnel release surgery for bilateral upper extremity paresthesias.
Bilateral carpal tunnel syndrome should prompt consideration of systemic diseases that infiltrate or impinge both canals (eg, rheumatoid arthritis, acromegaly, hypothyroidism, amyloidosis), although it is most frequently explained by a bilateral repetitive stress (eg, workplace typing). The development of upper extremity weakness suggests that an alternative condition such as cervical myelopathy, bilateral radiculopathy, or a rapidly progressive peripheral neuropathy may be responsible for his paresthesias. It would be unusual for a central nervous system process to selectively cause bilateral upper extremity weakness. Occasionally, patients emerge from surgery with limb weakness caused by peripheral nerve injury sustained from malpositioning of the extremity, but this would have been evident immediately following the operation.
Postoperative fevers are frequently unexplained, but require a search for common healthcare‐associated infections, such as pneumonia, urinary tract infection, intravenous catheter thrombophlebitis, wound infection, or Clostridium difficile colitis. However, such complications are unlikely following an ambulatory procedure. Confusion and fever together point to a central nervous system infection (meningoencephalitis or brain abscess) or a systemic infection that has impaired cognition. Malignancies can cause fever and altered mental status, but these are typically asynchronous events.
His past medical history was notable for hypertension, dyslipidemia, gout, actinic keratosis, and gastroesophageal reflux. His surgical history included bilateral knee replacements, repair of a left rotator cuff injury, and a herniorrhaphy. He was a nonsmoker who consumed 4 to 6 beers daily. His medications included clonidine, colchicine, atorvastatin, extended release metoprolol, triamterene‐hydrochlorothiazide, probenecid, and as‐needed ibuprofen and omeprazole.
Upon presentation he was cooperative and in no distress. Temperature was 38.9°C, pulse 119 beats per minute, blood pressure 140/90 mm Hg, and oxygen saturation 94% on room air. He was noted to have logical thinking but impaired concentration. His upper extremity movement was restricted by postoperative discomfort and swelling rather than true weakness. The rest of the exam was normal.
This patient's male gender, history of gout, hypertension, alcohol consumption, and thiazide diuretic use placed him at an increased risk of a gout attack.[19, 20] The possible interruption of urate‐lowering therapy for the surgical procedure and surgery itself further heightened his risk of suffering acute gouty arthritis in the perioperative period.[21] The patient's encephalopathy may have masked back pain and precluded an accurate neurologic exam. There is one case report to our knowledge describing encephalopathy that improved with colchicine and was possibly related to gout.[22] This patient's encephalopathy was deemed multifactorial and attributed to alcohol withdrawal, medications (including opioids and steroids), and infection (pneumonia).
Gout is best known for its peripheral arthritis and is rarely invoked in the consideration of spinal and myelopathic processes where more pressing competing diagnoses, such as infection and malignancy, are typically considered. In addition, when surgical specimens are submitted for examination for pathology in formaldehyde (rather than alcohol), monosodium urate crystals are dissolved and are thus difficult to identify in the specimen.
This case reminds us that gout remains a diagnostic challenge and should be considered in the differential of an inflammatory process. Recognition of the multifaceted nature of gout can allow for the earlier recognition and treatment of the less typical presentations of this ancient malady.
KEY TEACHING POINTS
- Crystalline disease is a common cause of postoperative arthritis.
- Gout (and pseudogout) should be considered in cases of focal inflammation (detected by examination or imaging) when the evidence or predisposition for infection is limited or nonexistent.
- Spinal gout presents with back pain, radiculopathy, or cord compression and may be confused with spinal metastasis, infection, and stenosis.
Acknowledgements
The authors thank Dr. Kari Waddell and Elaine Bammerlin for their assistance in the preparation of this manuscript.
Disclosure: Nothing to report.
Box 1
The approach of an expert clinician to a clinical conundrum is revealed through the presentation of an actual patient's case, in a format typical of a morning report. As in patient care, sequential pieces of information are provided to the discussant, who is unfamiliar with the case. The focus is on the thought processes of both the clinical team caring for the patient and the discussant.
Box 2
This icon represents the patient's case. Each paragraph that follows represents the discussant's thoughts.
A 67‐year‐old male presented to an outside hospital with a 1‐day history of fevers up to 39.4°C, bilateral upper extremity weakness, and confusion. Forty‐eight hours prior to his presentation he had undergone uncomplicated bilateral carpal tunnel release surgery for the complaint of bilateral upper extremity paresthesias.
Bilateral carpal tunnel syndrome should prompt consideration of systemic diseases that infiltrate or impinge both canals (eg, rheumatoid arthritis, acromegaly, hypothyroidism, amyloidosis), although it is most frequently explained by a bilateral repetitive stress (eg, workplace typing). The development of upper extremity weakness suggests that an alternative condition such as cervical myelopathy, bilateral radiculopathy, or a rapidly progressive peripheral neuropathy may be responsible for his paresthesias. It would be unusual for a central nervous system process to selectively cause bilateral upper extremity weakness. Occasionally, patients emerge from surgery with limb weakness caused by peripheral nerve injury sustained from malpositioning of the extremity, but this would have been evident immediately following the operation.
Postoperative fevers are frequently unexplained, but require a search for common healthcare‐associated infections, such as pneumonia, urinary tract infection, intravenous catheter thrombophlebitis, wound infection, or Clostridium difficile colitis. However, such complications are unlikely following an ambulatory procedure. Confusion and fever together point to a central nervous system infection (meningoencephalitis or brain abscess) or a systemic infection that has impaired cognition. Malignancies can cause fever and altered mental status, but these are typically asynchronous events.
His past medical history was notable for hypertension, dyslipidemia, gout, actinic keratosis, and gastroesophageal reflux. His surgical history included bilateral knee replacements, repair of a left rotator cuff injury, and a herniorrhaphy. He was a nonsmoker who consumed 4 to 6 beers daily. His medications included clonidine, colchicine, atorvastatin, extended release metoprolol, triamterene‐hydrochlorothiazide, probenecid, and as‐needed ibuprofen and omeprazole.
Upon presentation he was cooperative and in no distress. Temperature was 38.9°C, pulse 119 beats per minute, blood pressure 140/90 mm Hg, and oxygen saturation 94% on room air. He was noted to have logical thinking but impaired concentration. His upper extremity movement was restricted because of postoperative discomfort and swelling rather than true weakness. The rest of the exam was normal.
Metabolic, infectious, structural (intracranial), and toxic disorders can cause altered mental status. His heavy alcohol use puts him at risk for alcohol withdrawal and infections (such as Listeria meningitis), both of which may explain his fever and altered mental status. Signs and symptoms of meningitis are absent at this time. His knee prostheses could have harbored an infection preoperatively and therefore warrant close examination. Patients sometimes have adverse reactions to medications they have been prescribed but are not exposed to until hospitalization, although his surgical procedure was likely done on an outpatient basis. Empiric thiamine should be administered early given his confusion and alcohol habits.
Basic laboratories revealed a hemoglobin of 11.2 g/dL, white blood cell (WBC) count of 6,900/mm3 with 75% neutrophils, and platelets of 206,000/mm3. Mean corpuscular volume was 97 fL. Serum albumin was 2.4 g/dL, sodium 134 mmol/L, potassium 3.9 mmol/L, blood urea nitrogen 12 mg/dL, and creatinine 0.9 mg/dL. The aspartate aminotransferase was 93 U/L, alanine aminotransferase 73 U/L, alkaline phosphatase 254 U/L, and total bilirubin 1.0 mg/dL. Urinalysis was normal. Over the next 16 days fevers and waxing and waning mentation continued. The following studies were normal or negative: blood and urine cultures; transthoracic echocardiogram, antinuclear antibodies, hepatitis B surface antigen, hepatitis C antibody, and human immunodeficiency virus antibody; magnetic resonance imaging of the brain, electroencephalogram, and lower extremity venous ultrasound.
Hypoalbuminemia may signal chronic illness, hypoproduction from liver disease (caused by his heavy alcohol use), or losses from the kidney or gastrointestinal tract. His anemia may reflect chronic disease or point toward a specific underlying disorder. For example, fever and anemia could arise from hemolytic processes such as thrombotic thrombocytopenic purpura or clostridial infections.
An extensive workup has not revealed a cause for his prolonged fever (eg, infection, malignancy, autoimmune condition, or toxin). Likewise, an explanation for confusion is lacking. Because systemic illness and structural brain disease have not been uncovered, a lumbar puncture is indicated.
A lumbar puncture under fluoroscopic guidance revealed a cerebrospinal fluid (CSF) WBC count of 6/mm3, red blood cell (RBC) count 2,255/mm3, protein 49 mg/dL, and glucose 54 mg/dL. The WBC differential was not reported. No growth was reported on bacterial cultures. Polymerase chain reactions for enterovirus and herpes simplex viruses 1 and 2 were negative. Cryptococcal antigen and Venereal Disease Research Laboratory serologies were also negative.
A CSF WBC count of 6 is out of the normal range, but could be explained by a traumatic tap given the elevated RBC; the protein and glucose are likewise at the border of normal. Collectively, these are nonspecific findings that could point to an infectious or noninfectious cause of intrathecal or paraspinous inflammation, but are not suggestive of bacterial meningitis.
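The discussant's reasoning about a traumatic tap can be made quantitative with a common bedside rule of thumb: subtract roughly one CSF WBC for every 500 to 1,000 RBCs introduced by the tap. A minimal sketch follows; the 1:750 ratio is an illustrative assumption within that commonly quoted range, not a validated constant for this patient.

```python
def corrected_csf_wbc(csf_wbc, csf_rbc, rbc_per_wbc=750):
    """Adjust a CSF WBC count for blood contamination from a traumatic tap.

    Assumes roughly 1 WBC is carried in per `rbc_per_wbc` RBCs (a rule of
    thumb, commonly quoted as 1 per 500-1,000); clamps at zero.
    """
    corrected = csf_wbc - csf_rbc / rbc_per_wbc
    return max(corrected, 0.0)

# Values from this case: CSF WBC 6/mm3, RBC 2,255/mm3.
print(corrected_csf_wbc(6, 2255))  # ~3 WBC/mm3 after correction
```

Even after correction the count sits at the upper edge of normal, consistent with the discussant's read that the findings are nonspecific rather than suggestive of bacterial meningitis.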
The patient developed pneumonia, for which he received ertapenem. On hospital day 17 he was intubated for hypoxia and respiratory distress and was extubated after 4 days of mechanical ventilation. Increasing weakness in all extremities prompted magnetic resonance imaging of the spine, which revealed fluid and enhancement involving the soft tissues around C3‐C4 and C5‐C6, raising concerns for discitis and osteomyelitis. Possible septic arthritis at the C3‐C4 and C4‐C5 facets was noted. Ring enhancing fluid collections from T2‐T8 compatible with an epidural abscess with cord compression at T4‐T5 and T6‐T7 were seen. Enhancement and fluid involving the facet joints between T2‐T7 was also consistent with septic arthritis (Figure 1).

His pneumonia appears to have developed many days into his hospitalization, and therefore is unlikely to account for his initial fever and confusion. Blood cultures and echocardiogram have not suggested an endovascular infection that could account for such widespread vertebral and epidural deposition. A wide number of bacteria can cause epidural abscesses and septic arthritis, most commonly Staphylococcus aureus. Less common pathogens with a predilection for osteoarticular involvement, such as Brucella species, warrant consideration when there is appropriate epidemiologic risk.
Systemic bacterial infection remains a concern with his alcoholism rendering him partially immunosuppressed. However, a large number of adjacent spinal joints harboring a bacterial infection is unusual, and a working diagnosis of multilevel spinal infection, therefore, should prompt consideration of noninfectious processes. When a patient develops a swollen peripheral joint and fever in the postoperative setting, gout or pseudogout is a leading consideration. That same thinking should be applied to the vertebrae, where spinal gout can manifest. Surgery itself, or associated changes in alcohol consumption patterns or in medications (at least 4 of which are relevant to gout: colchicine, hydrochlorothiazide, probenecid, and ibuprofen), could predispose him to a flare.
Aspiration of the epidural collection yielded a negative Gram stain and culture. He developed swelling in the bilateral proximal interphalangeal joints and was treated with steroids and colchicine for suspected gout flare. Vancomycin and piperacillin‐tazobactam were initiated, and on hospital day 22 the patient was transferred to another hospital for further evaluation by neurosurgery.
The negative Gram stain and culture argues against septic arthritis, but these are imperfect tests and will not detect atypical pathogens (eg, spinal tuberculosis). Reexamination of the aspirate for urate and calcium pyrophosphate crystals would be useful. Initiation of steroids in the setting of potentially undiagnosed infection requires a careful risk/benefit analysis. It may be reasonable to treat the patient with colchicine alone while withholding steroids and avoiding nonsteroidal agents in case invasive procedures are planned.
On exam his temperature was 36°C, blood pressure 156/92 mm Hg, pulse 100 beats per minute, respirations 21 per minute, and oxygen saturation 97% on room air. He was not in acute distress and was only oriented to self. Bilateral 2+ lower extremity pitting edema up to the knees was noted. Examination of the heart and lungs was unremarkable. Gouty tophi were noted over both elbows. His joints were normal.
Cranial nerves II–XII were normal. Motor exam revealed normal muscle tone and bulk. Muscle strength was approximately 3/5 in the right upper extremity and 4+/5 in the left upper extremity. Bilateral lower extremity strength was 3/5 in hip flexion, knee flexion, and knee extension. Dorsiflexion and plantar flexion were approximately 2/5 bilaterally. Sensation was intact to light touch and pinprick, and proprioception was normal. Gait was not tested. A Foley catheter was in place.
This examination confirms ongoing encephalopathy and incomplete quadriplegia. The lower extremity weakness is nearly equal proximally and distally, which can be seen with an advanced peripheral neuropathy but is more characteristic of myelopathy. The expected concomitant sensory deficit of myelopathy is not present, although this may be difficult to detect in a confused patient. Reflex testing would help in distinguishing myelopathy (favored because of the imaging findings) from a rapid progressive peripheral motor neuropathy (eg, acute inflammatory demyelinating polyneuropathy or acute intermittent porphyria).
The pitting edema likely represents fluid overload, which can be nonspecific after prolonged immobility during hospitalization; hypoalbuminemia is oftentimes speculated to play a role when this develops. His alcohol use puts him at risk for heart failure (although there is no evidence of this on exam) and liver disease (which his liver function tests suggest). The tophi speak to the extent and chronicity of his hyperuricemia.
On arrival he reported recent onset diarrhea. Medications at transfer included metoprolol, omeprazole, prednisone, piperacillin/tazobactam, vancomycin, and colchicine; acetaminophen, bisacodyl, diphenhydramine, fentanyl, subcutaneous insulin, and labetalol were administered as needed. Laboratory studies included a hemoglobin of 9.5 g/dL, WBC count of 7,300/mm3 with 95% neutrophils, platelets 301,000/mm3, sodium 151 mmol/L, potassium 2.9 mmol/L, blood urea nitrogen 76 mg/dL, creatinine 2.0 mg/dL, aspartate aminotransferase 171 U/L, and alanine aminotransferase 127 U/L. Serum albumin was 1.7 g/dL.
At least 3 of his medications (diphenhydramine, fentanyl, and prednisone) may be contributing to his ongoing altered mental status, which may be further compounded by hypernatremia. Although his liver disease remains uncharacterized, hepatic encephalopathy may be contributing to his confusion as well.
Colchicine is likely responsible for his diarrhea, which would be the most readily available explanation for his hypernatremia, hypokalemia, and acute kidney injury (AKI). Acute kidney injury could result from progressive liver disease (hepatorenal syndrome), decreased arterial perfusion (suggested by third spacing or his diarrhea), acute tubular necrosis (from infection or medication), or urinary retention secondary to catheter obstruction. Acute hyperuricemia can also cause AKI (urate nephropathy).
Anemia has progressed and requires evaluation for blood loss as well as hemolysis. Hepatotoxicity from any of his medications (eg, acetaminophen) must be considered. Coagulation studies and review of the previous abdominal computed tomography would help determine the extent of his liver disease.
Neurosurgical consultation was obtained and the patient and his family elected to proceed with a thoracic laminectomy. Cheesy fluid was identified at the facet joints at T6‐T7, which was found to contain rare deposits of monosodium urate crystals. Surgical specimen cultures were sterile. His mental status and strength slowly improved to baseline following the surgery. He was discharged on postoperative day 7 to a rehabilitation facility. On the telephone follow‐up he reported that he has regained his strength completely.
The fluid analysis and clinical course confirms spinal gout. The presenting encephalopathy remains unexplained; I am unaware of gout leading to altered mental status.
COMMENTARY
Gout is an inflammatory condition triggered by the deposition of monosodium urate crystals in tissues in association with hyperuricemia.[1] Based on the 2007–2008 National Health and Nutrition Examination Survey, the prevalence of gout among US adults was 3.9% (8.3 million individuals).[2] These rates are increasing and are thought to be spurred by the aging population, increasing rates of obesity, and changing dietary habits, including increases in the consumption of soft drinks and red meat.[3, 4, 5] The development of gout during hospitalization can prolong length of stay, and the implementation of a management protocol appears to help decrease treatment delays and the inappropriate discontinuation of gout prophylaxis.[6, 7] Surgery, with its associated physiologic stressors, can trigger gout, which in this setting is often polyarticular and presents with fever, prompting testing and consultations for the febrile episode.[8]
Gout is an ancient disease that is familiar to most clinicians. In 1666, Daniel Sennert, a German physician, described gout as the physician's shame because of its infrequent recognition.[9] Clinical gout spans 3 stages: asymptomatic hyperuricemia, acute and intercritical gout, and chronic gouty arthritis. The typical acute presentation is monoarticular, with the abrupt onset of pain, swelling, warmth, and erythema in a peripheral joint. It manifests most characteristically in the first metatarsophalangeal joint (podagra), but also frequently involves the midfoot, ankle, knee, and wrist, and sometimes affects multiple joints simultaneously (polyarticular gout).[1, 10] The visualization of monosodium urate crystals either in synovial fluid or from a tophus is diagnostic of gout; however, guidelines recognize that a classic presentation of gout may be diagnosed based on clinical criteria alone.[11] Dual-energy computed tomography and ultrasonography are emerging as techniques for the visualization of monosodium urate crystals; however, they are not currently routinely recommended.[12]
There are many unusual presentations of gout, with an increase in such reports paralleling both the overall increase in the prevalence of gout and improvements in available imaging techniques.[13] Atypical presentations present diagnostic challenges and are often caused by tophaceous deposits in unusual locations. Reports of atypical gout have described entrapment neuropathies (eg, gouty deposits inducing carpal tunnel syndrome), ocular gout manifested as conjunctival deposits and uveitis, pancreatic gout presenting as a mass, and dermatologic manifestations including panniculitis.[13, 14]
Spinal gout (also known as axial gout) manifests when crystal‐induced inflammation, erosive arthritis, and tophaceous deposits occur along the spinal column. A cross‐sectional study of patients with poorly controlled gout reported the prevalence of spinal gout diagnosed by computed tomography to be 35%. These radiographic findings were not consistently correlated with back pain.[15] Imaging features that are suggestive of spinal gout include intra‐articular and juxta‐articular erosions with sclerotic margins and density greater than the surrounding muscle. Periosteal new bone formation adjacent to bony destruction can form overhanging edges.[16] When retrospectively presented with the final diagnosis, the radiologist at our institution noted that the appearance was typical of gout in an atypical location.
Spinal gout can be confused with spinal metastasis, infection, and stenosis. It can remain asymptomatic or present with back pain, radiculopathy, or cord compression. The lumbar spine is the most frequently affected site.[17, 18] Many patients with spinal gout have had chronic tophaceous gout with radiologic evidence of erosions in the peripheral joints.[15] Patients with spinal gout also have elevated urate levels and markers of inflammation.[18] Surgical decompression and stabilization are recommended when there is frank cord compression, progressive neurologic compromise, or lack of improvement with gout therapy alone.[18]
This patient's male gender, history of gout, hypertension, alcohol consumption, and thiazide diuretic use placed him at an increased risk of a gout attack.[19, 20] The possible interruption of urate‐lowering therapy for the surgical procedure and surgery itself further heightened his risk of suffering acute gouty arthritis in the perioperative period.[21] The patient's encephalopathy may have masked back pain and precluded an accurate neurologic exam. There is one case report to our knowledge describing encephalopathy that improved with colchicine and was possibly related to gout.[22] This patient's encephalopathy was deemed multifactorial and attributed to alcohol withdrawal, medications (including opioids and steroids), and infection (pneumonia).
Gout is best known for its peripheral arthritis and is rarely invoked in the consideration of spinal and myelopathic processes, where more pressing competing diagnoses, such as infection and malignancy, are typically considered. In addition, when surgical specimens are submitted for pathologic examination in formaldehyde (rather than alcohol), monosodium urate crystals dissolve and are thus difficult to identify in the specimen.
This case reminds us that gout remains a diagnostic challenge and should be considered in the differential of an inflammatory process. Recognition of the multifaceted nature of gout can allow for the earlier recognition and treatment of the less typical presentations of this ancient malady.
KEY TEACHING POINTS
- Crystalline disease is a common cause of postoperative arthritis.
- Gout (and pseudogout) should be considered in cases of focal inflammation (detected by examination or imaging) when the evidence or predisposition for infection is limited or nonexistent.
- Spinal gout presents with back pain, radiculopathy, or cord compression and may be confused with spinal metastasis, infection, and stenosis.
Acknowledgements
The authors thank Dr. Kari Waddell and Elaine Bammerlin for their assistance in the preparation of this manuscript.
Disclosure: Nothing to report.
- , . Clinical features and treatment of gout. In: Firestein GS, Budd RC, Gabriel SE, McInnes IB, O'Dell JR, eds. Kelley's Textbook of Rheumatology. Vol 2. 9th ed. Philadelphia, PA: Elsevier/Saunders; 2013:1544–1575.
- , , . Prevalence of gout and hyperuricemia in the US general population: the National Health and Nutrition Examination Survey 2007–2008. Arthritis Rheum. 2011;63(10):3136–3141.
- , , , . Increasing prevalence of gout and hyperuricemia over 10 years among older adults in a managed care population. J Rheumatol. 2004;31(8):1582–1587.
- , , , , . Purine‐rich foods, dairy and protein intake, and the risk of gout in men. New Engl J Med. 2004;350(11):1093–1103.
- , , . Fructose‐rich beverages and risk of gout in women. JAMA. 2010;304(20):2270–2278.
- , . Healthcare burden of in‐hospital gout. Intern Med J. 2012;42(11):1261–1263.
- , , , , , . Improved management of acute gout during hospitalization following introduction of a protocol. Int J Rheum Dis. 2012;15(6):512–520.
- , , . Postsurgical gout. Am Surg. 1995;61(1):56–59.
- , . Evolution of modern medicine. Arch Intern Med. 1960;105(4):640–644.
- . Clinical practice. Gout. N Engl J Med. 2011;364(5):443–452.
- . Management of gout: a 57‐year‐old man with a history of podagra, hyperuricemia, and mild renal insufficiency. JAMA. 2012;308(20):2133–2141.
- , , , et al. Diagnostic imaging of gout: comparison of high‐resolution US versus conventional X‐ray. Eur Radiol. 2008;18(3):621–630.
- , . The broad spectrum of urate crystal deposition: unusual presentations of gouty tophi. Semin Arthritis Rheum. 2012;42(2):146–154.
- , . Unusual clinical presentations of gout. Curr Opin Rheumatol. 2010;22(2):181–187.
- , , , , , . Correlates of axial gout: a cross‐sectional study. J Rheumatol. 2012;39(7):1445–1449.
- , , , , , . Axial gouty arthropathy. Am J Med Sci. 2009;338(2):140–146.
- , , . Axial (spinal) gout. Curr Rheumatol Rep. 2012;14(2):161–164.
- , , , . Spinal gout in a renal transplant patient: a case report and literature review. Surg Neurol. 2007;67(1):65–73.
- , , , et al. Alcohol consumption as a trigger of recurrent gout attacks. Am J Med. 2006;119(9):800.e11–800.e16.
- , , , , , . Recent diuretic use and the risk of recurrent gout attacks: the online case‐crossover gout study. J Rheumatol. 2006;33(7):1341–1345.
- , , , , . Clinical features and risk factors of postsurgical gout. Ann Rheum Dis. 2008;67(9):1271–1275.
- , , , . Gouty encephalopathy: myth or reality [in French]? Rev Med Interne. 1997;18(6):474–476.
Rapid Response Systems
In 2006,[1] we questioned whether rapid response systems (RRSs) were an effective strategy for detecting and managing deteriorating general ward patients. Since then, the implementation of RRSs has flourished, especially in the United States, where accreditors (the Joint Commission)[2] and patient‐safety organizations (the Institute for Healthcare Improvement's 100,000 Lives Campaign)[3] have strongly supported them. Decades of evidence show that general ward patients often experience unrecognized deterioration and cardiorespiratory arrest (CA). The low sensitivity and accuracy of periodic assessments by staff are thought to be a major reason for these lapses, as are imbalances between patient needs and clinician (primarily nursing) resources. A medical culture that punishes speaking up or bypassing the chain of command is also a likely contributor. A system that effectively recognizes the early signs of deterioration and responds quickly should catch problems before they become life threatening, and over the last decade RRSs have been the primary intervention implemented to do this. The potential for RRSs to improve outcomes has strong face validity, but researchers have struggled to demonstrate consistent improvements in outcomes across institutions. Given this, are RRSs the best intervention to prevent this failure to rescue? In this editorial we examine the progress of RRSs, compare them to other options, and consider whether we should continue to question their implementation.
In our 2007 systematic review,[4] we concluded there was weak to moderate evidence supporting RRSs. Since then, 6 other systematic reviews of the effectiveness or implementation of RRSs have been published. One high‐quality review of effectiveness studies published through 2008 by Chan et al.[5] found that RRSs significantly reduced non‐intensive care unit (ICU) CA (relative risk [RR], 0.66; 95% confidence interval [CI], 0.54‐0.80), but not total hospital mortality (RR, 0.96; 95% CI, 0.84‐1.09) in adult inpatients. In pediatric inpatients, RRSs led to significant improvements in both non‐ICU CA (RR, 0.62; 95% CI, 0.46 to 0.84) and total hospital mortality (RR, 0.79; 95% CI, 0.63 to 0.98). Subsequent to 2008, a structured search[6] identified 26 additional studies.[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] Although the benefit for CA in both adults and children has remained robust, even more so since Chan's review, mortality reductions in adult patients appear to have had the most notable shift. In aggregate, the point estimate for adult mortality (for those studies providing analyzable data) has strengthened to 0.88, with a confidence interval of 0.82‐0.96 in favor of the RRS strategy.
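Pooled estimates of this kind are conventionally produced by fixed-effect inverse-variance meta-analysis on the log relative-risk scale: each study's log RR is weighted by the reciprocal of its variance, and the weighted mean is exponentiated back. A rough sketch follows; the study counts are invented for illustration and are not the actual 26 studies summarized above.

```python
import math

# Hypothetical study-level data (arrests/patients, RRS vs. control periods).
# Illustrative only -- not the real studies pooled in the text.
studies = [
    {"e1": 30, "n1": 5000, "e0": 45, "n0": 5000},
    {"e1": 12, "n1": 2000, "e0": 20, "n0": 2100},
    {"e1": 55, "n1": 9000, "e0": 60, "n0": 8800},
]

def pooled_rr(studies):
    """Fixed-effect inverse-variance pooling on the log relative-risk scale."""
    num = den = 0.0
    for s in studies:
        rr = (s["e1"] / s["n1"]) / (s["e0"] / s["n0"])
        # Delta-method approximation to the variance of log(RR).
        var = 1 / s["e1"] - 1 / s["n1"] + 1 / s["e0"] - 1 / s["n0"]
        weight = 1.0 / var
        num += weight * math.log(rr)
        den += weight
    log_rr = num / den
    se = math.sqrt(1.0 / den)
    # 95% CI computed on the log scale, then exponentiated back.
    return (math.exp(log_rr),
            math.exp(log_rr - 1.96 * se),
            math.exp(log_rr + 1.96 * se))

rr, lo, hi = pooled_rr(studies)
print(f"Pooled RR {rr:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```

This also illustrates why an aggregate can "strengthen" as studies accumulate: each favorable study adds weight, narrowing the pooled confidence interval even when no single study is individually significant.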
This change has occurred as the analyzable studies since 2008 have all had favorable point estimates, and 4 have had statistically significant confidence intervals. Prior to 2008, 5 had unfavorable point estimates, and only 2 had favorable confidence intervals. As RRSs expand, the benefits, although not universal (some hospitals still experience no improvement in outcomes), seem to be getting stronger and more consistent. This may be secondary to maturation of the intervention and implementation strategies, or it may be the result of secular trends outside of the RRS intervention, although studies controlling for this found it not to be the case.[10] The factors associated with successful implementation of the RRS or improved outcomes include knowledge of activation criteria, communication, teamwork, lack of criticism for activating the RRS, and better attitudes about the team's positive effect on nurses and patients. Many of these factors relate to an improved safety culture in general. Additionally, activation rates may have increased in more recent studies, as greater utilization is associated with improved outcomes.[31] Finally, RRSs, like other patient‐safety and quality interventions, mature with time, often taking several years before they have a full effect on outcomes.[31, 32]
Despite these more favorable results for RRSs, we still see a large discrepancy between the magnitude of benefit for CA and mortality. This may partly be because the exposure groups are different; most studies examined non‐ICU CA, yet studies reporting mortality used total hospital mortality (ICU and non‐ICU). Additionally, although RRSs may effectively prevent CA, this intervention may have a more limited effect in preventing the patient's ultimate demise (particularly in the ICU).
We also still see that effectiveness reports for RRSs continue to be of low to moderate quality. Many reports give no statistics or denominator data or have missing data. Few control for secular trends in providers, outcomes, and confounders. Outcome measures vary widely, and none conducted blinded outcome assessments. Most studies use a pre‐post design without concurrent controls, substantially increasing the risk of bias. The better‐designed studies that use concurrent controls or cluster randomization (Priestley,[33] Bristow,[34] and the MERIT trial[35]) tend to show lower treatment effects, although interestingly in the MERIT trial, while the cluster‐randomized data showed no benefit, the pre‐post data showed significant improvement in the RRS intervention hospitals. These results have been attributed to the control hospitals using their code teams for RRS activities,[36] negating a comparative improvement in the intervention hospitals.
Can we improve RRS research? Likely, yes. We can begin by being more careful about defining the exposure group. Ideally, studies should not include data from the ICU or the emergency department because these patient populations are not part of the exposure group. Although most studies removed ICU and emergency department data for CA, they did not do so for hospital mortality. ICU mortality is likely biased, because only a small proportion of ICU patients have been exposed to an RRS. Definitions also need to be stringent and uniform. For example, CA may be defined in a variety of ways such as calling the code team versus documented cardiopulmonary resuscitation. Unexpected hospital mortality is often defined as excluding patients with do not resuscitate (DNR) orders, but this may or may not accurately exclude expected deaths. We also need to better attempt to control for confounders and secular trends. Outcomes such as CA and mortality are strongly influenced by changes in patient case‐mix over time, the frequency of care limitation/DNR orders, or by poor triage decisions.[37] Outcomes such as unanticipated ICU admission are indirect and may be heavily influenced by local cultural factors. Finally, authors need to provide robust statistical data and clear numerators and denominators to support their conclusions.
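The call above for clear numerators, denominators, and a uniform exposure group can be made concrete with a small sketch. The record fields and the per-1000-admissions convention are assumptions for illustration, not a reporting standard from the cited literature:

```python
def non_icu_ca_rate(admissions):
    """Non-ICU cardiorespiratory arrests per 1000 ward admissions.

    `admissions` is a list of dicts with (assumed) keys:
      'unit'   - 'ward', 'icu', or 'ed'
      'arrest' - True if a documented CA occurred on that unit
    ICU and emergency-department stays are excluded from both the
    numerator and the denominator, so the outcome is measured only
    in the population actually exposed to the RRS.
    """
    ward = [a for a in admissions if a["unit"] == "ward"]
    if not ward:
        raise ValueError("no ward admissions in denominator")
    arrests = sum(1 for a in ward if a["arrest"])
    return 1000.0 * arrests / len(ward)

# Synthetic example: 1000 ward admissions with 3 arrests; ICU and
# ED events are present but excluded from the calculation.
data = (
    [{"unit": "ward", "arrest": False}] * 997
    + [{"unit": "ward", "arrest": True}] * 3
    + [{"unit": "icu", "arrest": True}] * 5   # excluded
    + [{"unit": "ed", "arrest": False}] * 10  # excluded
)
rate = non_icu_ca_rate(data)  # 3 arrests per 1000 ward admissions
```

Reporting both the count (numerator) and the admission total (denominator) alongside such a rate is exactly the kind of transparency the text argues many RRS studies lack.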
Although we need to do our best to improve the quality of the RRS literature, the near ubiquitous presence of this patient‐safety intervention in North American hospitals raises a crucial question: Do we even need more effectiveness studies, and if so, what kind? Randomized controlled trials are unlikely. It is hard to argue that we still sit at a position of equipoise, and randomizing deteriorating patients to standard care versus an RRS is neither practical nor ethical. Finding appropriate concurrent control hospitals that have not implemented some type of RRS would also be very difficult.
We should, however, continue to test the effectiveness of RRSs but in a more diverse manner. RRSs should be more directly compared to other interventions that can improve the problem of failure to rescue such as increased nurse staffing[38, 39, 40] and hospitalist staffing.[41] The low sensitivity and accuracy of monitoring vital signs on general wards by staff is also an area strongly deserving of investigation, as it is likely central to the problem. Researchers have sought to use various combinations of vital signs, including aggregated or weighted scoring systems, and recent data suggest some approaches may be superior to others.[42] Many have advocated for continuous monitoring of a limited set of vital signs similar to the ICU, and there are some recent data indicating that this might be effective.[43, 44] This work is in the early stages, and we do not yet know whether this strategy will affect outcomes. It is conceivable that if the false alarm rate can be kept very low and we can minimize the failure to recognize deteriorating patients (good sensitivity, specificity, and positive predictive value), the need for the RRS response team may be reduced or even eliminated. Additionally, as electronic medical records (EMRs) have expanded, there has been growing interest in leveraging these systems to improve the effectiveness of RRSs.[45] There is a tremendous amount of information within the EMRs that can be used to complement vital‐sign monitoring (manual or continuous), because baseline medical problems, laboratory values, and recent history may have a strong impact on the predictive value of changes in vital signs.
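The trade-off the paragraph above describes, between catching deterioration (sensitivity) and drowning staff in false alarms (positive predictive value), follows directly from how rare deterioration is on a general ward. The sketch below pairs a deliberately simplified aggregated vital-sign score (the thresholds are invented; validated systems such as NEWS use finer-grained, evidence-based bands) with a PPV calculation showing why even a fairly specific alarm generates mostly false positives at low event prevalence:

```python
def warning_score(resp_rate, spo2, sbp, heart_rate):
    """A simplified, hypothetical aggregated vital-sign score.

    Each grossly abnormal vital sign contributes 3 points.
    Thresholds here are illustrative only.
    """
    score = 0
    if resp_rate >= 25 or resp_rate <= 8:
        score += 3
    if spo2 <= 91:
        score += 3
    if sbp <= 90:
        score += 3
    if heart_rate >= 131 or heart_rate <= 40:
        score += 3
    return score

def positive_predictive_value(sens, spec, prevalence):
    """PPV from sensitivity, specificity, and event prevalence."""
    tp = sens * prevalence               # true-positive fraction
    fp = (1 - spec) * (1 - prevalence)   # false-positive fraction
    return tp / (tp + fp)

# A stable patient scores 0; a patient with four deranged vitals
# scores the maximum of 12 under these assumed thresholds.
stable = warning_score(resp_rate=16, spo2=97, sbp=120, heart_rate=80)
unstable = warning_score(resp_rate=28, spo2=88, sbp=85, heart_rate=140)

# With 2% ward prevalence of deterioration, even 90% sensitivity
# and 95% specificity leave most alarms false.
ppv = positive_predictive_value(sens=0.90, spec=0.95, prevalence=0.02)
```

Under these assumed numbers the PPV is roughly 0.27, meaning nearly three of four alarms would be false, which is why the text stresses keeping false alarm rates very low before continuous monitoring can reduce or replace the response team.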
Research should also focus on the possible unintended consequences, costs, and the cost‐effectiveness of RRSs compared with other interventions that can or may reduce the rate of failure to rescue. Certainly, establishing RRSs has costs, including staff time and the need to pull staff from other clinical duties to respond. Unintended harms, such as diversion of ICU staff from their usual care, are often mentioned but never rigorously evaluated. Increasing nurse staffing has very substantial costs, but how these costs compare to those of the RRS is unclear, although the comparison would likely favor the RRS, because RRS staffing typically relies on existing employees with expertise in caring for the critically ill rather than workforce expansion. Given the current healthcare economic climate, any model that relies on additional employees is not likely to gain support. Establishing continuous monitoring systems has up‐front capital costs, although they may reduce other costs in the long run (eg, staff, medical liability). They also have intangible costs for provider workload if the false alarm rates are too high. Again, this strategy is too new to know the answers to these concerns. As we move forward, such evaluations are needed to guide policy decisions.
We also need more evaluation of RRS implementation science. The optimal way to organize, train, and staff RRSs is unknown. Most programs use physician‐led teams, although some use nurse‐led teams. Few studies have compared the various models, although 1 study that compared a resident‐led to an attending‐led team found no difference.[17] Education is ubiquitous, although actual staff training (simulation for example) is not commonly described. In addition, there is wide variation in the frequency of RRS activation. We know nurses and residents often feel pressured not to activate RRSs, and much of the success of the RRS relies on nurses identifying deteriorating patients and calling the response team. The use of continuous monitoring combined with automatic notification of staff may reduce the barriers to activating RRSs, increasing activation rates, but until then we need more understanding of how to break down these barriers. Family/patient access to activation has also gained ground (1 program demonstrated outcome improvement only after this was established[13]), but is not yet widespread.
The role of the RRS in improving processes of care, such as the appropriate institution of DNR orders, end‐of‐life/palliative care discussions, and early goal‐directed therapy for sepsis, has been presented in several studies[46, 47] but remains inadequately evaluated. Here too, there is much to learn about how we might realize the full effectiveness of this patient‐safety strategy beyond outcomes such as CA and hospital mortality. Ideally, if all appropriate patients had DNR orders and we stopped failing to recognize and respond to deteriorating ward patients, CAs on general hospital wards could be nearly eliminated.
RRSs have been described as a band‐aid for a failed model of general ward care.[37] What is clear is that many patients suffer preventable harm from unrecognized deterioration. This problem needs to be addressed, but are RRSs the best intervention? Despite the Joint Commission's Patient Safety Goal 16, should we still question their implementation? Should we (and the Joint Commission) reconsider our approach and prioritize our efforts elsewhere, or should we feel comfortable with the investment that we have made in these systems? Even though there are many unknowns, and the quality of RRS studies needs improvement, the literature is accumulating that RRSs do reduce non‐ICU CA and improve hospital mortality. Without direct comparison studies demonstrating superiority of other expensive strategies, there is little reason to reconsider the RRS concept or question their implementation and our investment. We should instead invest further in this foundational patient‐safety strategy to make it as effective as it can be.
Disclosures: Dr. Pronovost reports the following potential conflicts of interest: grant or contract support from the Agency for Healthcare Research and Quality, and the Gordon and Betty Moore Foundation (research related to patient safety and quality of care), and the National Institutes of Health (acute lung injury research); consulting fees from the Association of Professionals in Infection Control and Epidemiology, Inc.; honoraria from various hospitals, health systems, and the Leigh Bureau to speak on quality and patient safety; book royalties from the Penguin Group; and board membership for the Cantel Medical Group. Dr. Winters reports the following potential conflicts of interest: contract or grant support from Masimo Corporation, honoraria from 3M Corporation and various hospitals and health systems, royalties from Lippincott Williams & Wilkins (UpToDate), and consulting fees from several legal firms for medical legal consulting.
- Rapid response teams: walk, don't run. JAMA. 2006;296:1645–1647.
- Joint Commission requirement: The Joint Commission announces the 2008 National Patient Safety Goals and Requirements. Jt Comm Perspect. 2007;27(7):1–22.
- Institute for Healthcare Improvement. 5 million lives campaign: overview. Available at: http://www.ihi.org/offerings/Initiatives/PastStrategicInitiatives/5MillionLivesCampaign/Pages/default.aspx. Accessed November 28, 2012.
- Rapid response systems: a systematic review. Crit Care Med. 2007;35:1238–1243.
- Rapid response teams: a systematic review and meta‐analysis. Arch Intern Med. 2010;170:18–26.
- Rapid‐response systems as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158:417–425.
- Hospital‐wide code rates and mortality before and after implementation of a rapid response team. JAMA. 2008;300:2506–2513.
- Experience of pediatric rapid response team in a tertiary care hospital in Pakistan. Indian J Pediatr. 2010;77:273–276.
- Rescue me: saving the vulnerable non‐ICU patient population. Jt Comm J Qual Patient Saf. 2009;35:199–205.
- Reduction in hospital‐wide mortality after implementation of a rapid response team: a long‐term cohort study. Crit Care. 2011;15:R269.
- Using an advanced practice nursing model for a rapid response team. Jt Comm J Qual Patient Saf. 2008;34:743–747.
- Immediate and long‐term impact of medical emergency teams on cardiac arrest prevalence and mortality: a plea for periodic basic life‐support training programs. Crit Care Med. 2009;37:3054–3061.
- Successful implementation of a family and patient activated rapid response team in an adult level 1 trauma center. Resuscitation. 2010;81:1676–1681.
- A reduction in cardiac arrests and duration of clinical instability after implementation of a paediatric rapid response system. Qual Saf Health Care. 2009;18:500–504.
- Implementing a rapid response team to decrease emergencies outside the ICU: one hospital's experience. Medsurg Nurs. 2009;18:84–90, 126.
- Sustained effectiveness of a primary‐team‐based rapid response system. Crit Care Med. 2012;40:2562–2568.
- Association between implementation of an intensivist‐led medical emergency team and mortality. BMJ Qual Saf. 2012;21:152–159.
- Reducing in‐hospital cardiac arrests and hospital mortality by introducing a medical emergency team. Intensive Care Med. 2010;36:100–106.
- Implementation of a multicenter rapid response system in pediatric academic hospitals is effective. Pediatrics. 2011;128:72–78.
- The impact of medical emergency teams on ICU admission rates, cardiopulmonary arrests and mortality in a regional hospital. Resuscitation. 2011;82:707–712.
- Introduction of a rapid response system at a United States veterans affairs hospital reduced cardiac arrests. Anesth Analg. 2010;111:679–686.
- The effect of the medical emergency team on unexpected cardiac arrest and death at the VA Caribbean healthcare system: a retrospective study. Crit Care Shock. 2010;13:98–105.
- Four years' experience with a hospitalist‐led medical emergency team: an interrupted time series. J Hosp Med. 2012;7:98–103.
- Changing cardiac arrest and hospital mortality rates through a medical emergency team takes time and constant review. Crit Care Med. 2010;38:445–450.
- Clinical emergencies and outcomes in patients admitted to a surgical versus medical service. Resuscitation. 2011;82:415–418.
- Evaluating a new rapid response team: NP‐led versus intensivist‐led comparisons. AACN Adv Crit Care. 2012;23:32–42.
- Implementation of a rapid response team: a success story. Crit Care Nurse. 2009;29:66–75.
- Rapid response team in an academic institution: does it make a difference? Chest. 2011;139:1361–1367.
- Reduction of hospital mortality and of preventable cardiac arrest and death on introduction of a pediatric medical emergency team. Pediatr Crit Care Med. 2009;10:306–312.
- Medical emergency teams are associated with reduced mortality across a major metropolitan health network after two years service: a retrospective study using government administrative data. Crit Care. 2012;16:R210.
- Long term effect of a medical emergency team on cardiac arrests in a teaching hospital. Crit Care. 2005;9:R808–R815.
- Six year audit of cardiac arrests and medical emergency team calls in an Australian outer metropolitan teaching hospital. BMJ. 2007;335:1210–1212.
- Introducing Critical Care Outreach: a ward‐randomised trial of phased introduction in a general hospital. Intensive Care Med. 2004;30:1398–1404.
- Rates of in‐hospital arrests, deaths, and intensive care admissions: the effect of a medical emergency team. Med J Aust. 2000;173:236–240.
- Introduction of the medical emergency team (MET) system: a cluster randomised controlled trial. Lancet. 2005;365:2091–2097.
- The effectiveness of implementation of the medical emergency team (MET) system and factors associated with use during the MERIT study. Crit Care Resusc. 2007;9:206–212.
- Rethinking rapid response teams. JAMA. 2010;304:1375–1376.
- Lower mortality for abdominal aortic aneurysm repair in high‐volume hospitals is contingent upon nurse staffing [published online ahead of print October 22, 2012]. Health Serv Res. doi: 10.1111/1475-6773.12004.
- Nurse‐staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346:1715–1722.
- The association of registered nurse staffing levels and patient outcomes: systematic review and meta‐analysis. Med Care. 2007;45:1195–1204.
- Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med. 2007;357:2589–2600.
- The ability of the National Early Warning Score (NEWS) to discriminate patients at risk of early cardiac arrest, unanticipated intensive care unit admission, and death. Resuscitation. 2013;84:465–470.
- Impact of pulse oximetry surveillance on rescue events and intensive care unit transfers: a before‐and‐after concurrence study. Anesthesiology. 2010;112:282–287.
- A controlled trial of electronic automated advisory vital signs monitoring in general hospital wards. Crit Care Med. 2012;40:2349–2361.
- Agency for Healthcare Research and Quality. Early warning scoring system proactively identifies patients at risk of deterioration, leading to fewer cardiopulmonary emergencies and deaths. Available at: http://www.innovations.ahrq.gov/content.aspx?id=2607. Accessed March 26, 2013.
- Effect of a rapid response system for patients in shock on time to treatment and mortality during 5 years. Crit Care Med. 2007;35:2568–2575.
- The medical emergency team and end‐of‐life care: a pilot study. Crit Care Resusc. 2007;9:151–156.
In 2006,[1] we questioned whether rapid response systems (RRSs) were an effective strategy for detecting and managing deteriorating general ward patients. Since then, the implementation of RRSs has flourished, especially in the United States, where accreditors (Joint Commission)[2] and patient‐safety organizations (Institute for Healthcare Improvement 100,000 Lives Campaign)[3] have strongly supported RRSs. Decades of evidence show that general ward patients often experience unrecognized deterioration and cardiorespiratory arrest (CA). The low sensitivity and accuracy of periodic assessments by staff are thought to be a major reason for these lapses, as are imbalances between patient needs and clinician (primarily nursing) resources. Additionally, a medical culture that punishes speaking up or bypassing the chain of command is also a likely contributor to the problem. A system that effectively recognizes the early signs of deterioration and quickly responds should catch problems before they become life threatening. Over the last decade, RRSs have been the primary intervention implemented to do this. The potential for RRSs to improve outcomes has strong face validity, but researchers have struggled to demonstrate consistent improvements in outcomes across institutions. Given this, are RRSs the best intervention to prevent this failure to rescue? In this editorial we examine the progress of RRSs, how they compare to other options, and we consider whether we should continue to question their implementation.
In our 2007 systematic review,[4] we concluded there was weak to moderate evidence supporting RRSs. Since then, 6 other systematic reviews of the effectiveness or implementation of RRSs have been published. One high‐quality review of effectiveness studies published through 2008 by Chan et al.[5] found that RRSs significantly reduced non‐intensive care unit (ICU) CA (relative risk [RR], 0.66; 95% confidence interval [CI], 0.54‐0.80), but not total hospital mortality (RR, 0.96; 95% CI, 0.84‐1.09) in adult inpatients. In pediatric inpatients, RRSs led to significant improvements in both non‐ICU CA (RR, 0.62; 95% CI, 0.46 to 0.84) and total hospital mortality (RR, 0.79; 95% CI, 0.63 to 0.98). Subsequent to 2008, a structured search[6] finds 26 additional studies.[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] Although the benefit for CA in both adults and children has remained robust, even more so since Chan's review, mortality reductions in adult patients appear to have had the most notable shift. In aggregate, the point estimate (for those studies providing analyzable data), for adult mortality has strengthened to 0.88, with a confidence interval of 0.82‐0.96 in favor of the RRS strategy.
This change has occurred as the analyzable studies since 2008 have all had favorable point estimates, and 4 have had statistically significant confidence intervals. Prior to 2008, 5 had unfavorable point estimates, and only 2 had favorable confidence intervals. As RRSs expand, the benefits, although not universal (some hospitals still experience no improvement in outcomes), seem to be getting stronger and more consistent. This may be secondary to maturation of the intervention and implementation strategies, or it may be the result of secular trends outside of the RRS intervention, although studies controlling for this found it not to be the case.[10] The factors associated with successful implementation of the RRS or improved outcomes include knowledge of activation criteria, communication, teamwork, lack of criticism for activating the RRS, and better attitudes about the team's positive effect on nurses and patients. Many of these factors relate to an improved safety culture in general. Additionally, activation rates may have increased in more recent studies, as greater utilization is associated with improved outcomes.[31] Finally, RRSs, like other patient‐safety and quality interventions, mature with time, often taking several years before they have a full effect on outcomes.[31, 32]
Despite these more favorable results for RRSs, we still see a large discrepancy between the magnitude of benefit for CA and mortality. This may partly be because the exposure groups are different; most studies examined non‐ICU CA, yet studies reporting mortality used total hospital mortality (ICU and non‐ICU). Additionally, although RRSs may effectively prevent CA, this intervention may have a more limited effect in preventing the patient's ultimate demise (particularly in the ICU).
We also still see that effectiveness reports for RRSs continue to be of low to moderate quality. Many reports give no statistics or denominator data or have missing data. Few control for secular trends in providers, outcomes, and confounders. Outcome measures vary widely, and none conducted blinded outcome assessments. Most studies use a pre‐post design without concurrent controls, substantially increasing the risk of bias. The better‐designed studies that use concurrent controls or cluster randomization (Priestley,[33] Bristow,[34] and the MERIT trial[35]) tend to show lower treatment effects, although interestingly in the MERIT trial, while the cluster‐randomized data showed no benefit, the pre‐post data showed significant improvement in the RRS intervention hospitals. These results have been attributed to the control hospitals using their code teams for RRS activities,[36] negating a comparative improvement in the intervention hospitals.
Can we improve RRS research? Likely, yes. We can begin by being more careful about defining the exposure group. Ideally, studies should not include data from the ICU or the emergency department because these patient populations are not part of the exposure group. Although most studies removed ICU and emergency department data for CA, they did not do so for hospital mortality. ICU mortality is likely biased, because only a small proportion of ICU patients have been exposed to an RRS. Definitions also need to be stringent and uniform. For example, CA may be defined in a variety of ways such as calling the code team versus documented cardiopulmonary resuscitation. Unexpected hospital mortality is often defined as excluding patients with do not resuscitate (DNR) orders, but this may or may not accurately exclude expected deaths. We also need to better attempt to control for confounders and secular trends. Outcomes such as CA and mortality are strongly influenced by changes in patient case‐mix over time, the frequency of care limitation/DNR orders, or by poor triage decisions.[37] Outcomes such as unanticipated ICU admission are indirect and may be heavily influenced by local cultural factors. Finally, authors need to provide robust statistical data and clear numerators and denominators to support their conclusions.
Although we need to do our best to improve the quality of the RRS literature, the near ubiquitous presence of this patient‐safety intervention in North American hospitals raises a crucial question, Do we even need more effectiveness studies and if so what kind? Randomized controlled trials are not likely. It is hard to argue that we still sit at a position of equipoise, and randomizing patients who are deteriorating to standard care versus an RRS is neither practical nor ethical. Finding appropriate concurrent control hospitals that have not implemented some type of RRS would also be very difficult.
We should, however, continue to test the effectiveness of RRSs but in a more diverse manner. RRSs should be more directly compared to other interventions that can improve the problem of failure to rescue such as increased nurse staffing[38, 39, 40] and hospitalist staffing.[41] The low sensitivity and accuracy of monitoring vital signs on general wards by staff is also an area strongly deserving of investigation, as it is likely central to the problem. Researchers have sought to use various combinations of vital signs, including aggregated or weighted scoring systems, and recent data suggest some approaches may be superior to others.[42] Many have advocated for continuous monitoring of a limited set of vital signs similar to the ICU, and there are some recent data indicating that this might be effective.[43, 44] This work is in the early stages, and we do not yet know whether this strategy will affect outcomes. It is conceivable that if the false alarm rate can be kept very low and we can minimize the failure to recognize deteriorating patients (good sensitivity, specificity, and positive predictive value), the need for the RRS response team may be reduced or even eliminated. Additionally, as electronic medical records (EMRs) have expanded, there has been growing interest in leveraging these systems to improve the effectiveness of RRSs.[45] There is a tremendous amount of information within the EMRs that can be used to complement vital‐sign monitoring (manual or continuous), because baseline medical problems, laboratory values, and recent history may have a strong impact on the predictive value of changes in vital signs.
Research should also focus on the possible unintended consequences, costs, and the cost‐effectiveness of RRSs compared with other interventions that can or may reduce the rate of failure to rescue. Certainly, establishing RRSs has costs including staff time and the need to pull staff from other clinical duties to respond. Unintended harm, such as diversion of ICU staff from their usual care, are often mentioned but never rigorously evaluated. Increasing nurse staffing has very substantial costs, but how these costs compare to the costs of the RRS are unclear, although likely the comparison would be very favorable to the RRS, because staffing typically relies on existing employees with expertise in caring for the critically ill as opposed to workforce expansion. Given the current healthcare economic climate, any model that relies on additional employees is not likely to gain support. Establishing continuous monitoring systems have up‐front capital costs, although they may reduce other costs in the long run (eg, staff, medical liability). They also have intangible costs for provider workload if the false alarm rates are too high. Again, this strategy is too new to know the answers to these concerns. As we move forward, such evaluations are needed to guide policy decisions.
We also need more evaluation of RRS implementation science. The optimal way to organize, train, and staff RRSs is unknown. Most programs use physician‐led teams, although some use nurse‐led teams. Few studies have compared the various models, although 1 study that compared a resident‐led to an attending‐led team found no difference.[17] Education is ubiquitous, although actual staff training (simulation for example) is not commonly described. In addition, there is wide variation in the frequency of RRS activation. We know nurses and residents often feel pressured not to activate RRSs, and much of the success of the RRS relies on nurses identifying deteriorating patients and calling the response team. The use of continuous monitoring combined with automatic notification of staff may reduce the barriers to activating RRSs, increasing activation rates, but until then we need more understanding of how to break down these barriers. Family/patient access to activation has also gained ground (1 program demonstrated outcome improvement only after this was established[13]), but is not yet widespread.
The role of the RRS in improving processes of care, such as the appropriate institution of DNR orders, end of life/palliative care discussions, and early goal‐directed therapy for sepsis, have been presented in several studies[46, 47] but remain inadequately evaluated. Here too, there is much to learn about how we might realize the full effectiveness of this patient‐safety strategy beyond outcomes such as CA and hospital mortality. Ideally, if all appropriate patients had DNR orders and we stopped failing to recognize and respond to deteriorating ward patients, CAs on general hospital wards could be nearly eliminated.
RRSs have been described as a band‐aid for a failed model of general ward care.[37] What is clear is that many patients suffer preventable harm from unrecognized deterioration. This needs to be challenged, but are RRSs the best intervention? Despite the Joint Commission's Patient Safety Goal 16, should we still question their implementation? Should we (and the Joint Commission) reconsider our approach and prioritize our efforts elsewhere or should we feel comfortable with the investment that we have made in these systems? Even though there are many unknowns, and the quality of RRS studies needs improvement, the literature is accumulating that RRSs do reduce non‐ICU CA and improve hospital mortality. Without direct comparison studies demonstrating superiority of other expensive strategies, there is little reason to reconsider the RRS concept or question their implementation and our investment. We should instead invest further in this foundational patient‐safety strategy to make it as effective as it can be.
Disclosures: Dr. Pronovost reports the following potential conflicts of interest: grant or contract support from the Agency for Healthcare Research and Quality, and the Gordon and Betty Moore Foundation (research related to patient safety and quality of care), and the National Institutes of Health (acute lung injury research); consulting fees from the Association of Professionals in Infection Control and Epidemiology, Inc.; honoraria from various hospitals, health systems, and the Leigh Bureau to speak on quality and patient safety; book royalties from the Penguin Group; and board membership for the Cantel Medical Group. Dr. Winters reports the following potential conflicts of interest: contract or grant support from Masimo Corporation, honoraria from 3M Corporation and various hospitals and health systems, royalties from Lippincott Williams &Wilkins (UptoDate), and consulting fees from several legal firms for medical legal consulting.
In 2006,[1] we questioned whether rapid response systems (RRSs) were an effective strategy for detecting and managing deteriorating general ward patients. Since then, the implementation of RRSs has flourished, especially in the United States where accreditors (Joint Commission)[2] and patient‐safety organizations (Institute for Healthcare Improvement 100,000 Live Campaign)[3] have strongly supported RRSs. Decades of evidence show that general ward patients often experience unrecognized deterioration and cardiorespiratory arrest (CA). The low sensitivity and accuracy of periodic assessments by staff are thought to be a major reason for these lapses, as are imbalances between patient needs and clinician (primarily nursing) resources. Additionally, a medical culture that punishes speaking up or bypassing the chain of command are also likely contributors to the problem. A system that effectively recognizes the early signs of deterioration and quickly responds should catch problems before they become life threatening. Over the last decade, RRSs have been the primary intervention implemented to do this. The potential for RRSs to improve outcomes has strong face validity, but researchers have struggled to demonstrate consistent improvements in outcomes across institutions. Given this, are RRSs the best intervention to prevent this failure to rescue? In this editorial we examine the progress of RRSs, how they compare to other options, and we consider whether we should continue to question their implementation.
In our 2007 systematic review,[4] we concluded there was weak to moderate evidence supporting RRSs. Since then, 6 other systematic reviews of the effectiveness or implementation of RRSs have been published. One high‐quality review of effectiveness studies published through 2008 by Chan et al.[5] found that RRSs significantly reduced non‐intensive care unit (ICU) CA (relative risk [RR], 0.66; 95% confidence interval [CI], 0.54‐0.80) but not total hospital mortality (RR, 0.96; 95% CI, 0.84‐1.09) in adult inpatients. In pediatric inpatients, RRSs led to significant improvements in both non‐ICU CA (RR, 0.62; 95% CI, 0.46‐0.84) and total hospital mortality (RR, 0.79; 95% CI, 0.63‐0.98). Subsequent to 2008, a structured search[6] found 26 additional studies.[7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30] Although the benefit for CA in both adults and children has remained robust, and even more so since Chan's review, mortality reductions in adult patients have shown the most notable shift. In aggregate, the point estimate for adult mortality (for those studies providing analyzable data) has strengthened to 0.88 (95% CI, 0.82‐0.96) in favor of the RRS strategy.
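Pooled point estimates like the one above come from standard inverse‐variance meta‐analysis on the log‐RR scale. The sketch below shows the mechanics only; the study counts are hypothetical illustrations, not the actual data from the trials pooled in this editorial.

```python
import math

# Hypothetical per-study counts: (events_RRS, n_RRS, events_control, n_control).
# Illustrative only -- not the studies cited in the text.
studies = [
    (30, 1000, 45, 1000),
    (12, 800, 20, 800),
    (50, 2500, 60, 2400),
]

# Fixed-effect, inverse-variance pooling of log relative risks.
weighted_sum, total_weight = 0.0, 0.0
for a, n1, c, n2 in studies:
    log_rr = math.log((a / n1) / (c / n2))
    # Approximate variance of the log RR for cumulative-incidence data.
    var = 1 / a - 1 / n1 + 1 / c - 1 / n2
    weight = 1 / var
    weighted_sum += weight * log_rr
    total_weight += weight

mean_log_rr = weighted_sum / total_weight
se = math.sqrt(1 / total_weight)
pooled_rr = math.exp(mean_log_rr)
ci_low = math.exp(mean_log_rr - 1.96 * se)
ci_high = math.exp(mean_log_rr + 1.96 * se)
print(f"pooled RR = {pooled_rr:.2f} (95% CI {ci_low:.2f}-{ci_high:.2f})")
```

A pooled RR below 1 with a confidence interval excluding 1, as in the adult-mortality aggregate above, indicates a statistically significant benefit of the intervention.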
This change has occurred as the analyzable studies since 2008 have all had favorable point estimates, and 4 have had statistically significant confidence intervals. Prior to 2008, 5 had unfavorable point estimates, and only 2 had favorable confidence intervals. As RRSs expand, the benefits, although not universal (some hospitals still experience no improvement in outcomes), seem to be getting stronger and more consistent. This may be secondary to maturation of the intervention and implementation strategies, or it may be the result of secular trends outside of the RRS intervention, although studies controlling for this found it not to be the case.[10] The factors associated with successful implementation of the RRS or improved outcomes include knowledge of activation criteria, communication, teamwork, lack of criticism for activating the RRS, and better attitudes about the team's positive effect on nurses and patients. Many of these factors relate to an improved safety culture in general. Additionally, activation rates may have increased in more recent studies, as greater utilization is associated with improved outcomes.[31] Finally, RRSs, like other patient‐safety and quality interventions, mature with time, often taking several years before they have a full effect on outcomes.[31, 32]
Despite these more favorable results for RRSs, we still see a large discrepancy between the magnitude of benefit for CA and mortality. This may partly be because the exposure groups are different; most studies examined non‐ICU CA, yet studies reporting mortality used total hospital mortality (ICU and non‐ICU). Additionally, although RRSs may effectively prevent CA, this intervention may have a more limited effect in preventing the patient's ultimate demise (particularly in the ICU).
We also still see that effectiveness reports for RRSs continue to be of low to moderate quality. Many reports give no statistics or denominator data or have missing data. Few control for secular trends in providers, outcomes, and confounders. Outcome measures vary widely, and none conducted blinded outcome assessments. Most studies use a pre‐post design without concurrent controls, substantially increasing the risk of bias. The better‐designed studies that use concurrent controls or cluster randomization (Priestley,[33] Bristow,[34] and the MERIT trial[35]) tend to show lower treatment effects, although interestingly in the MERIT trial, while the cluster‐randomized data showed no benefit, the pre‐post data showed significant improvement in the RRS intervention hospitals. These results have been attributed to the control hospitals using their code teams for RRS activities,[36] negating a comparative improvement in the intervention hospitals.
Can we improve RRS research? Likely, yes. We can begin by defining the exposure group more carefully. Ideally, studies should not include data from the ICU or the emergency department, because these patient populations are not part of the exposure group. Although most studies removed ICU and emergency department data for CA, they did not do so for hospital mortality. ICU mortality is likely biased, because only a small proportion of ICU patients have been exposed to an RRS. Definitions also need to be stringent and uniform. For example, CA may be defined in a variety of ways, such as calling the code team versus documented cardiopulmonary resuscitation. Unexpected hospital mortality is often defined by excluding patients with do not resuscitate (DNR) orders, but this may or may not accurately exclude expected deaths. We also need to try harder to control for confounders and secular trends. Outcomes such as CA and mortality are strongly influenced by changes in patient case‐mix over time, the frequency of care limitation/DNR orders, and poor triage decisions.[37] Outcomes such as unanticipated ICU admission are indirect and may be heavily influenced by local cultural factors. Finally, authors need to provide robust statistical data and clear numerators and denominators to support their conclusions.
Although we need to do our best to improve the quality of the RRS literature, the near‐ubiquitous presence of this patient‐safety intervention in North American hospitals raises a crucial question: do we even need more effectiveness studies, and if so, what kind? Randomized controlled trials are unlikely. It is hard to argue that we still sit at a position of equipoise, and randomizing deteriorating patients to standard care versus an RRS is neither practical nor ethical. Finding appropriate concurrent control hospitals that have not implemented some type of RRS would also be very difficult.
We should, however, continue to test the effectiveness of RRSs but in a more diverse manner. RRSs should be more directly compared to other interventions that can improve the problem of failure to rescue such as increased nurse staffing[38, 39, 40] and hospitalist staffing.[41] The low sensitivity and accuracy of monitoring vital signs on general wards by staff is also an area strongly deserving of investigation, as it is likely central to the problem. Researchers have sought to use various combinations of vital signs, including aggregated or weighted scoring systems, and recent data suggest some approaches may be superior to others.[42] Many have advocated for continuous monitoring of a limited set of vital signs similar to the ICU, and there are some recent data indicating that this might be effective.[43, 44] This work is in the early stages, and we do not yet know whether this strategy will affect outcomes. It is conceivable that if the false alarm rate can be kept very low and we can minimize the failure to recognize deteriorating patients (good sensitivity, specificity, and positive predictive value), the need for the RRS response team may be reduced or even eliminated. Additionally, as electronic medical records (EMRs) have expanded, there has been growing interest in leveraging these systems to improve the effectiveness of RRSs.[45] There is a tremendous amount of information within the EMRs that can be used to complement vital‐sign monitoring (manual or continuous), because baseline medical problems, laboratory values, and recent history may have a strong impact on the predictive value of changes in vital signs.
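The point about false alarm rates can be made concrete with a positive‐predictive‐value calculation: on a general ward where true deterioration is uncommon, even a sensitive monitor produces mostly false alarms unless specificity is very high. The sensitivity, specificity, and prevalence figures below are hypothetical assumptions for illustration, not values from the cited monitoring studies.

```python
def ppv(sensitivity: float, specificity: float, prevalence: float) -> float:
    """Positive predictive value of an alarm, from Bayes' rule."""
    true_pos = sensitivity * prevalence
    false_pos = (1 - specificity) * (1 - prevalence)
    return true_pos / (true_pos + false_pos)

# Hypothetical: suppose 2% of monitored ward patients are truly deteriorating.
prevalence = 0.02

# Modest specificity: the large pool of stable patients generates so many
# false alarms that most alarms are false, inviting alarm fatigue.
print(f"PPV at 80% specificity: {ppv(0.95, 0.80, prevalence):.2f}")

# Very low false-alarm rate: alarms become credible calls to action.
print(f"PPV at 99% specificity: {ppv(0.95, 0.99, prevalence):.2f}")
```

This is why driving down the false alarm rate, not just maintaining sensitivity, is central to whether continuous ward monitoring could reduce or even eliminate the need for a dedicated response team.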
Research should also focus on the possible unintended consequences, costs, and cost‐effectiveness of RRSs compared with other interventions that may reduce the rate of failure to rescue. Certainly, establishing RRSs has costs, including staff time and the need to pull staff from other clinical duties to respond. Unintended harm, such as diversion of ICU staff from their usual care, is often mentioned but has never been rigorously evaluated. Increasing nurse staffing has very substantial costs, but how these costs compare with those of the RRS is unclear; the comparison would likely favor the RRS, because it typically relies on existing employees with expertise in caring for the critically ill rather than workforce expansion. Given the current healthcare economic climate, any model that relies on additional employees is not likely to gain support. Establishing continuous monitoring systems has up‐front capital costs, although such systems may reduce other costs in the long run (eg, staff, medical liability). They also carry intangible costs in provider workload if false alarm rates are too high. Again, this strategy is too new to know the answers to these concerns. As we move forward, such evaluations are needed to guide policy decisions.
We also need more evaluation of RRS implementation science. The optimal way to organize, train, and staff RRSs is unknown. Most programs use physician‐led teams, although some use nurse‐led teams. Few studies have compared the various models, although 1 study that compared a resident‐led to an attending‐led team found no difference.[17] Education is ubiquitous, although actual staff training (simulation for example) is not commonly described. In addition, there is wide variation in the frequency of RRS activation. We know nurses and residents often feel pressured not to activate RRSs, and much of the success of the RRS relies on nurses identifying deteriorating patients and calling the response team. The use of continuous monitoring combined with automatic notification of staff may reduce the barriers to activating RRSs, increasing activation rates, but until then we need more understanding of how to break down these barriers. Family/patient access to activation has also gained ground (1 program demonstrated outcome improvement only after this was established[13]), but is not yet widespread.
The role of the RRS in improving processes of care, such as the appropriate institution of DNR orders, end‐of‐life/palliative care discussions, and early goal‐directed therapy for sepsis, has been presented in several studies[46, 47] but remains inadequately evaluated. Here too, there is much to learn about how we might realize the full effectiveness of this patient‐safety strategy beyond outcomes such as CA and hospital mortality. Ideally, if all appropriate patients had DNR orders and we stopped failing to recognize and respond to deteriorating ward patients, CAs on general hospital wards could be nearly eliminated.
RRSs have been described as a band‐aid for a failed model of general ward care.[37] What is clear is that many patients suffer preventable harm from unrecognized deterioration. This needs to be challenged, but are RRSs the best intervention? Despite the Joint Commission's Patient Safety Goal 16, should we still question their implementation? Should we (and the Joint Commission) reconsider our approach and prioritize our efforts elsewhere, or should we feel comfortable with the investment we have made in these systems? Even though there are many unknowns, and the quality of RRS studies needs improvement, accumulating evidence indicates that RRSs do reduce non‐ICU CA and improve hospital mortality. Without direct comparison studies demonstrating the superiority of other expensive strategies, there is little reason to reconsider the RRS concept or question its implementation and our investment. We should instead invest further in this foundational patient‐safety strategy to make it as effective as it can be.
- , , . Rapid response teams: walk, don't run. JAMA. 2006;296:1645–1647.
- Joint Commission requirement: The Joint Commission announces the 2008 National Patient Safety Goals and Requirements. Jt Comm Perspect. 2007;27(7):1–22.
- Institute for Healthcare Improvement. 5 million lives campaign: overview. Available at: http://www.ihi.org/offerings/Initiatives/PastStrategicInitiatives/5MillionLivesCampaign/Pages/default.aspx. Accessed November 28, 2012.
- , , , , , . Rapid response systems: a systematic review. Crit Care Med. 2007;35:1238–1243.
- , , , , . Rapid response teams: a systematic review and meta‐analysis. Arch Intern Med. 2010;170:18–26.
- , , , , , . Rapid‐response systems as a patient safety strategy: a systematic review. Ann Intern Med. 2013;158:417–425.
- , , , , , . Hospital‐wide code rates and mortality before and after implementation of a rapid response team. JAMA. 2008;300:2506–2513.
- , , , . Experience of pediatric rapid response team in a tertiary care hospital in Pakistan. Indian J Pediatr. 2010;77:273–276.
- , , , et al. Rescue me: saving the vulnerable non‐ICU patient population. Jt Comm J Qual Patient Saf. 2009;35:199–205.
- , , , , . Reduction in hospital‐wide mortality after implementation of a rapid response team: a long‐term cohort study. Crit Care. 2011;15:R269.
- , , , , . Using an advanced practice nursing model for a rapid response team. Jt Comm J Qual Patient Saf. 2008;34:743–747.
- , , , , , . Immediate and long‐term impact of medical emergency teams on cardiac arrest prevalence and mortality: a plea for periodic basic life‐support training programs. Crit Care Med. 2009;37:3054–3061.
- , , , , , . Successful implementation of a family and patient activated rapid response team in an adult level 1 trauma center. Resuscitation. 2010;81:1676–1681.
- , , , et al. A reduction in cardiac arrests and duration of clinical instability after implementation of a paediatric rapid response system. Qual Saf Health Care. 2009;18:500–504.
- , , , et al. Implementing a rapid response team to decrease emergencies outside the ICU: one hospital's experience. Medsurg Nurs. 2009;18:84–90, 126.
- , , , et al. Sustained effectiveness of a primary‐team‐based rapid response system. Crit Care Med. 2012;40:2562–2568.
- , , , . Association between implementation of an intensivist‐led medical emergency team and mortality. BMJ Qual Saf. 2012;21:152–159.
- , , , , , . Reducing in‐hospital cardiac arrests and hospital mortality by introducing a medical emergency team. Intensive Care Med. 2010;36:100–106.
- , , , et al. Implementation of a multicenter rapid response system in pediatric academic hospitals is effective. Pediatrics. 2011;128:72–78.
- , . The impact of medical emergency teams on ICU admission rates, cardiopulmonary arrests and mortality in a regional hospital. Resuscitation. 2011;82:707–712.
- , , , . Introduction of a rapid response system at a United States veterans affairs hospital reduced cardiac arrests. Anesth Analg. 2010;111:679–686.
- , , , . The effect of the medical emergency team on unexpected cardiac arrest and death at the VA Caribbean healthcare system: a retrospective study. Crit Care Shock. 2010;13:98–105.
- , , , , . Four years' experience with a hospitalist‐led medical emergency team: an interrupted time series. J Hosp Med. 2012;7:98–103.
- , , . Changing cardiac arrest and hospital mortality rates through a medical emergency team takes time and constant review. Crit Care Med. 2010;38:445–450.
- , , , et al. Clinical emergencies and outcomes in patients admitted to a surgical versus medical service. Resuscitation. 2011;82:415–418.
- , , , . Evaluating a new rapid response team: NP‐led versus intensivist‐led comparisons. AACN Adv Crit Care. 2012;23:32–42.
- , . Implementation of a rapid response team: a success story. Crit Care Nurse. 2009;29:66–75.
- , , , . Rapid response team in an academic institution: does it make a difference? Chest. 2011;139:1361–1367.
- , . Reduction of hospital mortality and of preventable cardiac arrest and death on introduction of a pediatric medical emergency team. Ped Crit Care Med. 2009;10:306–312.
- , . Medical emergency teams are associated with reduced mortality across a major metropolitan health network after two years service: a retrospective study using government administrative data. Crit Care. 2012;16:R210.
- , , , , et al. Long term effect of a medical emergency team on cardiac arrests in a teaching hospital. Crit Care. 2005;9:R808–R815.
- , , , . Six year audit of cardiac arrests and medical emergency team calls in an Australian outer metropolitan teaching hospital. BMJ. 2007;335:1210–1212.
- , , , et al. Introducing Critical Care Outreach: a ward‐randomised trial of phased introduction in a general hospital. Intensive Care Med. 2004;30:1398–1404.
- , , , et al. Rates of in‐hospital arrests, deaths, and intensive care admissions: the effect of a medical emergency team. Med J Aust. 2000;173:236–240.
- , , , et al. Introduction of the medical emergency team (MET) system: a cluster randomised controlled trial. Lancet. 2005;365:2091–2097.
- , , , , , . The effectiveness of implementation of the medical emergency team (MET) system and factors associated with use during the MERIT study. Crit Care Resusc. 2007;9:206–212.
- , . Rethinking rapid response teams. JAMA. 2010;304:1375–1376.
- , , . Lower mortality for abdominal aortic aneurysm repair in high‐volume hospitals is contingent upon nurse staffing [published online ahead of print October 22, 2012]. Health Serv Res. doi: 10.1111/1475–6773.12004.
- , , , , . Nurse‐staffing levels and the quality of care in hospitals. N Engl J Med. 2002;346:1715–1722.
- . The association of registered nurse staffing levels and patient outcomes: systematic review and meta‐analysis. Med Care. 2007;45:1195–1204.
- , , , , , . Outcomes of care by hospitalists, general internists, and family physicians. N Engl J Med. 2007;357:2589–2600.
- , , , , . The ability of the National Early Warning Score (NEWS) to discriminate patients at risk of early cardiac arrest, unanticipated intensive care unit admission, and death. Resuscitation. 2013;84:465–470.
- , , , . Impact of pulse oximetry surveillance on rescue events and intensive care unit transfers: a before‐and‐after concurrence study. Anesthesiology. 2010;112:282–287.
- , , , et al. A controlled trial of electronic automated advisory vital signs monitoring in general hospital wards. Crit Care Med. 2012;40:2349–2361.
- Agency for Healthcare Research and Quality. Early warning scoring system proactively identifies patients at risk of deterioration, leading to fewer cardiopulmonary emergencies and deaths. Available at: http://www.innovations.ahrq.gov/content.aspx?id=2607. Accessed March 26, 2013.
- , , , et al. Effect of a rapid response system for patients in shock on time to treatment and mortality during 5 years. Crit Care Med. 2007;35:2568–2575.
- , , , , , . The medical emergency team and end‐of‐life care: a pilot study. Crit Care Resusc. 2007;9:151–156.
ONLINE EXCLUSIVE: From Hospitalists, for Hospitalists: Top 10 Reasons To Come To HM13
In March, Noah J. Finkel, MD, FHM, of Lahey Health System-Lahey Hospital emailed his hospitalist colleagues and encouraged them to join him at HM13 next month. For hospitalists still undecided about attending the largest conference specifically for hospitalists—especially academic hospitalists and those interested in health IT—SHM offers Dr. Finkel’s “Top 10” reasons to register for the annual meeting, which kicks off May 16 at the Gaylord National Resort and Conference Center in National Harbor, Md.
- HM13 offers 22.5 CME credits (sometimes better just to spend a few days cramming it in).
- Hospitalists on the Hill (hospitalmedicine2013.org/advocacy.php): great opportunity to meet with members of Congress and discuss issues important to HM (because you really don’t understand the SGR for Medicare reimbursement).
- Network with other hospitalists from across the country (avoid “local” medical thinking).
- Academic medicine track courses to enhance your teaching and research expertise (please admit that you were probably never formally trained).
- Comanagement pre-course and track to help with Medicine consult and orthopedic comanagement (is it a good time to start a beta-blocker?).
- Updates in the evidence-based medicine track to make sure that you know about the latest research before the medical students do (never good to be revealed as practicing “old medicine”).
- Hospitalist career track lecture to make sure you are climbing the ladder (do you have a career plan at all?).
- CPOE guidelines for inpatient medical care: perfect role for a hospitalist.
- Bob Wachter’s keynote on quality, safety, and IT. He’s the father of HM—’nuff said.
- ZDoggMD, the funniest hospitalist there is!
Dr. Finkel is medical director of information technology, hospital medicine; assistant professor, Tufts University School of Medicine, Lahey Health System-Lahey Hospital, Boston
In March, Noah J. Finkel, MD, FHM, of Lahey Health System-Lahey Hospital emailed his hospitalist colleagues and encouraged them to join him at HM13 next month. For hospitalists still undecided about attending the largest conference specifically for hospitalists—especially academic hospitalists and those interested in health IT—SHM offers Dr. Finkel’s “Top 10” reasons to register for the annual meeting, which kicks off May 16 at the Gaylord National Resort and Conference Center in National Harbor, Md.
- HM13 offers 22.5 CME credits (sometimes better just to spend a few days cramming it in).
- Hospitalists on the Hill (hospitalmedicine2013.org/advocacy.php): great opportunity to meet with members of Congress and discuss issues important to HM (because you really don’t understand the SGR for Medicare reimbursement).
- Network with other hospitalists from across the country (avoid “local” medical thinking).
- Academic medicine track courses to enhance your teaching and research expertise (please admit that you were probably never formally trained).
- Comanagement pre-course and track to help with Medicine consult and orthopedic comanagement (is it a good time to start a beta-blocker?).
- Updates in the evidence-based medicine track to make sure that you know about the latest research before the medical students do (never good to be revealed as practicing “old medicine”).
- Hospitalist career track lecture to make sure you are climbing the ladder (do you have a career plan at all?).
- CPOE guidelines for inpatient medical care: perfect role for a hospitalist.
- Bob Wachter’s keynote on quality, safety, and IT. He’s the father of HM—’nuff said.
- ZDoggMD, the funniest hospitalist there is!
Dr. Finkel is medical director of information technology, hospital medicine; assistant professor, Tufts University School of Medicine, Lahey Health System-Lahey Hospital, Boston
In March, Noah J. Finkel, MD, FHM, of Lahey Health System-Lahey Hospital emailed his hospitalist colleagues and encouraged them to join him at HM13 next month. For hospitalists still undecided about attending the largest conference specifically for hospitalists—especially academic hospitalists and those interested in health IT—SHM offers Dr. Finkel’s “Top 10” reasons to register for the annual meeting, which kicks off May 16 at the Gaylord National Resort and Conference Center in National Harbor, Md.
- HM13 offers 22.5 CME credits (sometimes better just to spend a few days cramming it in).
- Hospitalists on the Hill (hospitalmedicine2013.org/advocacy.php): great opportunity to meet with members of Congress and discuss issues important to HM (because you really don’t understand the SGR for Medicare reimbursement).
- Network with other hospitalists from across the country (avoid “local” medical thinking).
- Academic medicine track courses to enhance your teaching and research expertise (please admit that you were probably never formally trained).
- Comanagement pre-course and track to help with Medicine consult and orthopedic comanagement (is it a good time to start a beta-blocker?).
- Updates in the evidence-based medicine track to make sure that you know about the latest research before the medical students do (never good to be revealed as practicing “old medicine”).
- Hospitalist career track lecture to make sure you are climbing the ladder (do you have a career plan at all?).
- CPOE guidelines for inpatient medical care: perfect role for a hospitalist.
- Bob Wachter’s keynote on quality, safety, and IT. He’s the father of HM—’nuff said.
- ZDoggMD, the funniest hospitalist there is!
Dr. Finkel is medical director of information technology, hospital medicine; assistant professor, Tufts University School of Medicine, Lahey Health System-Lahey Hospital, Boston.
Chlorhexidine-impregnated washcloths lower risk of hospital-acquired infections
Clinical question
In patients at high risk, does daily bathing with chlorhexidine-impregnated washcloths reduce the risk of hospital-acquired bloodstream infections?
Bottom line
For hospitalized patients at high risk of nosocomial infections, daily bathing with chlorhexidine-impregnated washcloths reduced the rate of methicillin-resistant Staphylococcus aureus (MRSA) and vancomycin-resistant enterococci (VRE) acquisition but did not reduce the rate of bloodstream infections from these organisms. The overall rate of hospital-acquired bloodstream infections was significantly reduced, however; this included infections from other organisms such as coagulase-negative staphylococci (CoNS) and fungi. LOE = 1b
Reference
Study design
Randomized controlled trial (nonblinded)
Funding source
Industry + government
Allocation
Uncertain
Setting
Inpatient (any location)
Synopsis
In this multicenter trial, investigators enrolled patients in 8 intensive care units and 1 bone marrow transplantation unit. Each unit was randomized to bathe patients daily with either nonantimicrobial washcloths or 2% chlorhexidine-impregnated washcloths for 6 months, followed by the alternate product for the next 6 months. Bathing was performed by using the washcloths on all body surfaces sequentially, excluding the face, per manufacturer’s instructions. Active surveillance testing for MRSA and VRE was performed on all units during the study period. Analysis was by intention to treat. Use of chlorhexidine washcloths lowered the risk of MRSA or VRE acquisition by 23% (5.10 vs 6.60 cases per 1000 patient-days; P = .03). The rate of hospital-acquired bloodstream infections also decreased by 28% with the use of these washcloths (4.78 vs 6.60 cases per 1000 patient-days; P = .007). Specifically, both primary bloodstream infections and central catheter-associated bloodstream infections occurred less frequently with the intervention (31% decrease in primary infections, P = .006; 53% decrease in catheter-related infections, P = .004). Thirty percent of the 221 bloodstream infections detected during both the intervention and control periods were due to staphylococci, either Staphylococcus aureus or CoNS. Use of the chlorhexidine washcloths decreased CoNS bloodstream infections by 56% (0.60 vs 1.36 cases per 1000 patient-days; P = .008) and fungal central catheter-associated infections by 90% (0.07 vs 0.77 cases per 1000 catheter-days; P < .001). There were no serious adverse events associated with the chlorhexidine washcloths. The MRSA and VRE isolates that were acquired did not show increased resistance to chlorhexidine, although this does not allay the concern regarding longer-term emergence of high-level resistance.
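The percentage reductions quoted in the synopsis follow directly from the reported event rates. A minimal sketch in Python, using only the per-1000-day rates given above:

```python
def relative_risk_reduction(intervention_rate, control_rate):
    """Relative risk reduction: 1 minus the ratio of event rates,
    with both rates in the same units (here, cases per 1000 days)."""
    return 1 - intervention_rate / control_rate

# MRSA/VRE acquisition: 5.10 vs 6.60 cases per 1000 patient-days
print(round(100 * relative_risk_reduction(5.10, 6.60)))  # → 23

# CoNS bloodstream infections: 0.60 vs 1.36 cases per 1000 patient-days
print(round(100 * relative_risk_reduction(0.60, 1.36)))  # → 56

# Fungal catheter-associated infections: 0.07 vs 0.77 per 1000 catheter-days
print(round(100 * relative_risk_reduction(0.07, 0.77)))  # ≈ 91 (reported as 90%)
```

Note that these are relative reductions; the absolute rate differences (roughly 1.5 events per 1000 patient-days for MRSA/VRE acquisition) are what determine how many patients must be bathed to prevent one event.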
10 days of enoxaparin better than 35 days of rivaroxaban for medical inpatient thromboprophylaxis (MAGELLAN)
Clinical question
Is rivaroxaban for 35 days better than enoxaparin for 10 days to prevent venous thromboembolism in medical inpatients?
Bottom line
Enoxaparin for 10 days provides similar protection to rivaroxaban for 35 days against symptomatic venous thromboembolism (VTE) or VTE-related death, and the extended use of rivaroxaban leads to an increase in clinically relevant and major bleeding. Rivaroxaban cannot be recommended for this indication. The larger question is whether we should be routinely anticoagulating these patients at all. Although the 2012 guidelines from the American College of Chest Physicians (http://guideline.gov/content.aspx?id=35263) recommend prophylaxis for inpatients at increased risk for VTE, a recent American College of Physicians' guideline calls this practice into question, noting that for every 4 pulmonary emboli prevented, you cause 9 major bleeding events (Ann Intern Med 2011;155:602). LOE = 1b
Reference
Study design
Randomized controlled trial (double-blinded)
Funding source
Industry
Allocation
Concealed
Setting
Inpatient (any location)
Synopsis
Patients admitted within 72 hours of an acute medical illness who were older than 40 years and who had reduced mobility were randomized to receive either enoxaparin 40 mg once daily for 10 days plus 35 days of oral placebo or rivaroxaban 10 mg once daily for 35 days plus 10 days of subcutaneous placebo. There were a total of 8101 patients in the study (average age = 71 years). Patients were hospitalized for infectious disease (45%), heart failure (32%), respiratory insufficiency (27%), stroke (17%), or active cancer (7%), and at least 1 day of immobilization, with decreased mobility for at least 4 days, was anticipated. This study was performed in 52 countries, many of which must keep their patients longer in the hospital than we do in the United States, as the median duration of hospitalization was a whopping 11 days. There was an extensive list of inclusion criteria, with patients having at least one risk factor for VTE and not having any obvious bleeding risks. Groups were balanced at the start of the study, analysis was by intention to treat, and outcomes were blindly adjudicated. Patients underwent ultrasound to detect asymptomatic deep vein thrombosis (DVT) at 10 days and 35 days, and underwent imaging to detect VTE if they were symptomatic at any time. The composite efficacy outcome was a combination of asymptomatic proximal DVT, symptomatic DVT or pulmonary embolism (PE), and VTE-related death at 10 and 35 days, and the safety outcome was major or fatal bleeding at 10 and 35 days. Only approximately 75% of the patients are included in the efficacy outcome, because approximately one-fourth in each group failed to have the follow-up ultrasound to detect asymptomatic DVT. Although the authors point to the superiority of rivaroxaban at 35 days (4.4% vs 5.7%; P = .02; number needed to treat [NNT] = 77), this is only because of a decrease in asymptomatic DVTs; that is, DVTs we would never have known about were it not for the mandated study ultrasound.
There was no significant difference in the likelihood of symptomatic VTE or VTE-related death. Major bleeding was more common in the rivaroxaban group at 35 days (4.1% vs 1.7%; P < .001; number needed to harm = 42). This includes 7 fatal bleeds in the rivaroxaban group and only 1 in the enoxaparin group. The authors do not perform statistical testing for fatal bleeds (and a number of other outcomes that appear unfavorable to rivaroxaban), but I did, and the difference was statistically significant (two-tailed chi-square = 4.7; P = .03). All-cause mortality was similar between groups. The "net benefit" was the composite of the primary efficacy and primary safety outcomes and favors enoxaparin (7.8% vs 9.4%; P = .02; NNT = 62). Rather than stating the obvious (enoxaparin was superior to rivaroxaban for the net benefit outcome), the authors spin this by stating that "the prespecified analysis of net clinical benefit or harm did not show a benefit with rivaroxaban at either day 10 or 35."
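The NNT and number-needed-to-harm figures above can be reproduced from the reported percentages, and the fatal-bleed comparison can be approximated with a standard 2×2 Pearson chi-square. A sketch in Python (standard library only); the per-arm denominators of roughly 4000 are an assumption derived from the 8101-patient total, not figures taken from the paper, so the chi-square value is approximate:

```python
import math

def nnt(control_rate, treatment_rate):
    """Number needed to treat (or harm): reciprocal of the absolute
    risk difference, rounded up by convention."""
    return math.ceil(1 / abs(control_rate - treatment_rate))

# Composite efficacy at 35 days: 4.4% rivaroxaban vs 5.7% enoxaparin
print(nnt(0.057, 0.044))  # → 77

# Major/clinically relevant bleeding at 35 days: 4.1% vs 1.7%
print(nnt(0.041, 0.017))  # → 42 (number needed to harm)

def chi2_2x2(a, b, c, d):
    """Pearson chi-square (1 df, no continuity correction) for the
    2x2 table [[a, b], [c, d]] of events vs non-events by arm."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

# Fatal bleeds: 7 events (rivaroxaban) vs 1 event (enoxaparin),
# assuming ~4000 patients per arm (hypothetical split of the 8101 total).
chi2 = chi2_2x2(7, 3993, 1, 3999)
print(chi2 > 3.84)  # exceeds the 1-df critical value at P = .05
```

With these assumed denominators the statistic lands near 4.5, consistent with the reviewer's reported 4.7 (the exact value depends on the safety-population counts in the paper).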
ONLINE EXCLUSIVE: Study: Tracheostomy Collar Facilitates Quicker Transition
Each day a patient spends on a ventilator increases pneumonia risk by about 1% (Am J Respir Crit Care Med. 2002;165[7]:867-903). Being unable to move or talk also might induce a sense of helplessness. As a result, many clinicians wean patients off a ventilator sooner rather than later.
A recent study (JAMA. 2013;309[7]:671-677) has found that unassisted breathing via a tracheostomy collar facilitates a quicker transition than breathing with pressure support after prolonged mechanical ventilation (>21 days). Investigators reported their findings at the Society of Critical Care Medicine’s 42nd Congress in January in San Juan, Puerto Rico.
On average, patients were able to successfully wean four days earlier with unassisted breathing versus pressure support—a significant difference, says lead investigator Amal Jubran, MD, section chief of pulmonary and critical-care medicine at the Edward Hines Jr. VA Hospital in Chicago. No major differences were reported in survival between the two groups at six-month and 12-month intervals after enrollment in the study.
“The faster pace of weaning in the tracheostomy collar group may be related to its effect on clinical decision-making,” says Dr. Jubran, a professor at Loyola University Chicago’s Stritch School of Medicine. “Observing a patient breathing through a tracheostomy collar provides the clinician with a clear view of the patient’s respiratory capabilities.”
With pressure support, by contrast, the ventilator’s assistance can mask a patient’s true respiratory capability. Amid this uncertainty, Dr. Jubran adds, clinicians are more likely to accelerate the weaning process in patients who unexpectedly respond well during a tracheostomy collar challenge than in those receiving a low level of pressure support.
In the study, less than 10% of 312 patients—most of whom were elderly—required reconnection to a ventilator after being weaned successfully. Weaning efforts should be restarted only after cardiopulmonary stability has been reached, she says.
Factoring into the equation are the measurements for blood pressure and respiratory rate and the amounts of oxygenation and sedation in patients on ventilators, says Paul Odenbach, MD, SHM, a hospitalist at Abbott Northwestern Hospital in Minneapolis.
“I look at them clinically overall,” he says. “The most important piece is eyeballing them from where they are in their disease trajectory. Are they awake enough to be protecting their airway once they are extubated?” He has found that a stable airway is more easily achieved with a tracheostomy collar.
Managing heart failure, treating infections, and optimizing nutrition are crucial before weaning off ventilation, says geriatrician Joel Sender, MD, section chief of pulmonary medicine at St. Barnabas Hospital in Bronx, N.Y., and medical director of its Rehabilitation & Continuing Care Center.
“It is important to identify the best candidates for weaning and then apply the best methods,” says Dr. Sender. “Sadly, many patients are not good candidates, and only a portion are successfully weaned.” That’s why “there’s a great need to have a frank discussion with the family to answer their questions and to promote a realistic set of treatment goals.” TH
Susan Kreimer is a freelance writer in New York.
Key Takeaways for Hospitalists
- The biggest obstacle in weaning management is the delay in starting to assess whether a patient is ready for weaning.
- Weaning off mechanical ventilation should be attempted as soon as cardiopulmonary instability has been resolved.
- Patients requiring prolonged mechanical ventilation should be weaned with daily trials of unassisted breathing through a tracheostomy collar and not with pressure support.

