Simple Interventions Save Lives
A new Health Affairs study tested three relatively simple and inexpensive interventions on a hospital unit to prevent the kinds of hospital-acquired infections that cause an estimated 99,000 patient deaths each year. Principal investigator Bradford Harris, MD, and colleagues conducted the research in the pediatric ICU (PICU) at the University of North Carolina at Chapel Hill School of Medicine, finding that patients admitted after the interventions were implemented left the hospital an average of two days earlier, at lower cost, and with a 2.3% lower death rate. The study authors projected annual savings of $12 million for a single PICU.1
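The savings projection is, at its core, length-of-stay arithmetic. The sketch below shows the form such an estimate can take; the admission volume and per-bed-day cost are hypothetical placeholders, not the study's actual inputs.

```python
# Back-of-the-envelope projection: savings from a shorter length of stay.
# All inputs are hypothetical placeholders, not figures from the Harris study.

def projected_annual_savings(admissions_per_year: int,
                             days_saved_per_admission: float,
                             cost_per_bed_day: float) -> float:
    """Savings = admissions x average days saved x cost per bed day."""
    return admissions_per_year * days_saved_per_admission * cost_per_bed_day

# Hypothetical PICU volume and daily cost, chosen only to illustrate the arithmetic.
savings = projected_annual_savings(admissions_per_year=1200,
                                   days_saved_per_admission=2.0,
                                   cost_per_bed_day=5000.0)
print(f"Projected annual savings: ${savings:,.0f}")
```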
The simple measures include strict enforcement of standard hand hygiene policies; guideline-recommended measures for ventilator patients, such as elevating the head of the hospital bed; and compliance with guidelines for maintaining central line catheters, along with educational posters and the use of oral care kits.
A recent article in the “Cleveland Plain Dealer” describes efforts in that city’s hospitals to enforce proper hand hygiene.2 MetroHealth Medical Center hired four employees it calls “infection prevention observers,” whose entire job is to make sure that every caregiver who comes near a patient washes his or her hands. They appear openly on the units, carrying clipboards and filling out sheets that track non-compliance.
The hospital’s hand hygiene compliance rate has reached 98% on all medical units (nationwide, the rate is around 50%), while bloodstream infections have dropped to one-third of what they were in 2010. Cleveland Clinic and University Hospitals achieved similar compliance by employing secret observers of staff hand-washing.
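As a rough illustration of what those tracking sheets feed into, the sketch below computes a unit-level compliance rate from observed hand hygiene opportunities. The counts are invented for illustration; they are not MetroHealth data.

```python
# Toy example: turning observers' tracking sheets into a compliance rate.
# Observation counts are invented for illustration, not MetroHealth data.

observations = [
    {"unit": "4A", "opportunities": 180, "compliant": 176},
    {"unit": "4B", "opportunities": 150, "compliant": 148},
    {"unit": "5A", "opportunities": 210, "compliant": 205},
]

total_opportunities = sum(o["opportunities"] for o in observations)
total_compliant = sum(o["compliant"] for o in observations)

# Compliance rate = compliant hand hygiene events / observed opportunities.
compliance_rate = 100.0 * total_compliant / total_opportunities
print(f"Hand hygiene compliance: {compliance_rate:.1f}%")
```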
CDC epidemiologist and hand hygiene expert Kate Ellingson, MD, told the newspaper that while the importance of hand hygiene has long been understood, consistent compliance is difficult for healthcare workers. But hospitals that use employee monitors, post data, and implement other hand hygiene initiatives tend to show strong compliance.
References
- Harris BD, Hanson H, Christy C, et al. Strict hand hygiene and other practices shortened stays and cut costs and mortality in a pediatric intensive care unit. Health Affairs. 2011;30(9):1751-1761.
- Tribble SJ. Cleveland MetroHealth Medical Center increases hand washing, reduces infections. “Cleveland Plain Dealer” website. Available at: http://www.cleveland.com/healthfit/index.ssf/2011/09/metrohealth_increases_hand_was.html. Accessed Oct. 15, 2011.
Dartmouth Atlas: Little Progress Reducing Readmissions
The newest Dartmouth Atlas report, released Sept. 28, documents striking variation in 30-day hospital readmission rates for Medicare patients across 308 hospital-referral regions.1 The authors found little progress in decreasing 30-day readmissions from 2004 to 2009, while for some conditions and many regions, rates actually went up.
National readmission rates following surgery were 12.7% in both 2004 and 2009; readmissions for medical conditions rose slightly, from 15.9% to 16.1%, over the same period. Only 42% of hospitalized Medicare patients discharged to home had a PCP contact within 14 days of discharge, according to the report.
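For context, the sketch below shows how 30-day readmission and 14-day PCP follow-up rates can be computed from discharge records; the records themselves are invented for illustration and are not Dartmouth Atlas data.

```python
# Illustrative calculation of 30-day readmission and 14-day PCP follow-up rates.
# The discharge records below are invented, not Dartmouth Atlas data.
from datetime import date

discharges = [
    {"discharged": date(2009, 3, 1), "readmitted": date(2009, 3, 20), "pcp_contact": date(2009, 3, 10)},
    {"discharged": date(2009, 4, 5), "readmitted": None, "pcp_contact": None},
    {"discharged": date(2009, 5, 2), "readmitted": date(2009, 7, 1), "pcp_contact": date(2009, 5, 30)},
]

def within(days, start, event):
    """True if the event occurred within `days` days after `start`."""
    return event is not None and 0 <= (event - start).days <= days

readmitted_30 = sum(within(30, d["discharged"], d["readmitted"]) for d in discharges)
pcp_within_14 = sum(within(14, d["discharged"], d["pcp_contact"]) for d in discharges)

print(f"30-day readmission rate: {100 * readmitted_30 / len(discharges):.1f}%")
print(f"PCP contact within 14 days: {100 * pcp_within_14 / len(discharges):.1f}%")
```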
The Dartmouth Atlas Project (www.dartmouthatlas.org) documents geographic variation in healthcare utilization unrelated to outcome. It offers an extensive database for comparison by state, county, region and facility.
The new report is the first to identify an association nationally between readmission rates and “the overall intensity of inpatient care provided to patients within a region or hospital,” with patterns of relatively high hospital utilization often corresponding with areas of higher readmissions. “Other patients are readmitted simply because they live in a locale where the hospital is used more frequently as a site of care,” the authors note.
Without continuous, high-quality care coordination across sites, the authors write, discharged patients can repeatedly bounce back to emergency rooms and hospitals.
Reference
By the Numbers: 209,000
Projected total number of adult in-hospital cardiac arrests that are treated with a resuscitation response each year in U.S. hospitals.1 Raina Merchant, MD, and colleagues from the University of Pennsylvania Health System derived several estimates from the American Heart Association’s Get with the Guidelines-Resuscitation registry for 2003 to 2007, weighted for total U.S. hospital bed days. The survival rate for in-hospital cardiac arrest is 21%, compared with 10% for arrests in other settings. The authors note that arrests might be rising, and that tracking them is “important for understanding the burden of in-hospital cardiac arrest and developing strategies to improve care for hospitalized patients,” Dr. Merchant says.
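The weighting approach can be sketched as a simple extrapolation from registry bed days to national bed days. The numbers below are hypothetical placeholders, not the figures from Merchant and colleagues.

```python
# Hypothetical extrapolation in the spirit of the weighting approach described:
# scale a registry's arrest rate per bed day up to national bed-day totals.
# All numbers are placeholders, not figures from Merchant et al.

registry_arrests = 5_000          # arrests recorded by participating hospitals (hypothetical)
registry_bed_days = 6_000_000     # bed days contributed by those hospitals (hypothetical)
national_bed_days = 250_000_000   # total U.S. hospital bed days (hypothetical)

arrests_per_bed_day = registry_arrests / registry_bed_days
projected_national_arrests = arrests_per_bed_day * national_bed_days
print(f"Projected annual in-hospital arrests: {projected_national_arrests:,.0f}")
```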
Reference
Lost in Transition
It’s been nearly two decades since I graduated from medical school. I think back and I honestly do not remember any lectures about transitions of care.
During residency, I remember some attending physicians would insist that when I discharged patients from the hospital, the patients had to leave with post-discharge appointments in hand. Like any diligent intern, I did as I was told. I telephoned the administrative assistants in clinic and booked follow-up appointments for my patients. I always asked for the first available appointment. Why? Because that was what my senior resident told me to do. I suspect he learned that from his resident as well.
Sometimes the appointment was scheduled for the week following discharge; other times it was six months later. I honestly didn’t give it much thought. There was a blank on the discharge paperwork and I filled it in with a date and time. I was doing my job—or so I thought.
Can you imagine if someone just gave you a slip of paper today telling you when to show up to get your teeth cleaned without consulting your schedule? How about scheduling the oil change for your car at a garage 100 miles away? Seems pretty silly, doesn’t it? Nothing about it seems customer-centric or cost-efficient.
With such a system in place, why are we surprised when patients do not show up for their follow-up appointments? When the patient presents to the ED later and is readmitted to the hospital, we label them as “non-compliant” because they failed to show up for their follow-up appointment.
Inefficient, Ineffective, Inappropriate
There are multiple problems with the above situation. The first problem: Why are doctors calling to schedule follow-up appointments in the first place? Do we ask airline pilots to serve refreshments? I suppose they could, but I’d rather they concentrate on flying the plane. It also seems like an awful waste of money and resources when we could accomplish the same feat with less-expensive flight attendants who are better trained to interact with passengers.
At most teaching hospitals across the country, I suspect we still rely on trainees to book follow-up appointments for patients. At hospitals without trainees, I suspect some of this responsibility falls on nurses and unit coordinators. Again, I wonder how often these people are actually in a position to schedule an appointment that the patient is likely to keep—or whether they are filling in a box on a checklist like I used to do.
Common Problem?
How do other industries address this issue? Well, many utilize customer service representatives to help consumers book their appointments. Some industries have advanced software that allows consumers to book their own appointments online. I have to tell you that I am chuckling as I write this. I’m chuckling not because this is funny—I am just amazed that something that is so common sense is not utilized consistently across the hospital industry. When was the last time you actually called a hotel to book a room? Most of us find it so much more convenient to book airline tickets or hotel rooms online.
If we were to create a system with the consumer’s satisfaction and cost in mind, would you rely on trainees, nurses, or unit coordinators to book follow-up appointments? I suppose Hypothetical System 2.0 would include consumer representatives speaking with patients to book appointments. Hypothetical System 3.0 would allow patients and/or a family member to book the appointment online.
I can tell you that folks at Beth Israel Deaconess Medical Center in Boston, where I work, have given this some thought. We are nowhere near a 3.0 version, but we do rely on professional appointment-makers to work with our hospitalized patients to book follow-up appointments. Inpatient providers put in the order online requesting follow-up appointments for their hospitalized patients. The online application asks the provider to specify the requests. Does the patient need follow-up with specialists, as well as their primary outpatient provider? The inpatient provider can specify the window of time in which they recommend follow-up for the patient. If I want my patient to follow up with their primary-care physician (PCP) within one week and with their cardiologist within two weeks, the appointment-maker will work with the patient and the respective doctors’ offices to make this happen. I am contacted only if any issues arise.
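To make the idea concrete, here is a toy data model for the kind of follow-up request an inpatient provider might enter. It is not BIDMC's actual application; the field names and logic are assumptions made purely for illustration.

```python
# Toy model of a follow-up appointment request entered at discharge.
# Not BIDMC's actual system; fields and logic are illustrative assumptions.
from dataclasses import dataclass
from datetime import date, timedelta

@dataclass
class FollowUpRequest:
    provider_type: str   # e.g. "PCP" or "Cardiology"
    window_days: int     # recommended follow-up window after discharge

def target_window(discharge_date: date, request: FollowUpRequest) -> tuple:
    """Date range the appointment-maker would aim for."""
    return discharge_date, discharge_date + timedelta(days=request.window_days)

requests = [FollowUpRequest("PCP", 7), FollowUpRequest("Cardiology", 14)]
for r in requests:
    start, end = target_window(date(2011, 10, 15), r)
    print(f"{r.provider_type}: schedule between {start} and {end}")
```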
All of this information is provided to the patient with their other discharge paperwork. Some of you might be asking: How can the hospital afford to pay for this software and for the cadre of professional appointment-makers? I am wondering how hospitals can afford not to. It’s like worrying about the cost of a college degree until you realize how difficult it is trying to get a job without one.
Part of the PCP “access” problem we have in this country is that not every patient shows up for scheduled appointments. Our appointment-makers minimize the “no show” rate because, by speaking with patients about their schedules, they book appointments that patients are actually likely to keep. One of the things we learned at Beth Israel was that our trainees were sometimes requesting appointments for patients within one week of discharge when I knew darn well that the patient was unlikely to make that appointment because the patient most likely would still be at rehab.
Prior to this system, we also had the occasional PCP who was upset because we booked their patient’s follow-up with a specialist who was outside that PCP’s “inner circle” of specialists. How in the world are any of us supposed to remember this information?
Well, our professional appointment-makers utilize this information as part of the algorithm they follow when booking appointments for patients. As our nation moves toward a value-based purchasing system for healthcare, we don’t need to reinvent the wheel; we can adopt proven practices from other cost-effective industries—and we can improve customer satisfaction.
I am interested in hearing how appointments are arranged for your hospitalized patients. Send me your thoughts at [email protected].
Dr. Li is president of SHM.
Quality, Defined
Pornography. There can be few better hooks for readers than that. Just typing the word is a bit uncomfortable. As is, I imagine, reading it. But it’s effective, and likely why you’ve made it to word 37 of my column—34 words further than you usually get, I imagine.
“What about pornography?” you ask with bated breath. “What could pornography possibly have to do with hospital medicine?” your mind wonders. “Is this the column that (finally) gets Glasheen fired?” the ambulance chaser in you titillates.
By now, you’ve no doubt heard the famous Potter Stewart definition of pornography: “I know it when I see it.” That’s how the former U.S. Supreme Court justice described his threshold for recognizing pornography. It was made famous in a 1960s decision about whether a particular movie scene was protected by the 1st Amendment right to free speech or, indeed, a pornographic obscenity to be censored. Stewart, who clearly recognized the need to “define” pornography, also recognized the inherent challenges in doing so. The I-know-it-when-I-see-it benchmark is, of course, flawed, but I defy you to come up with a better definition.
Quality Is, of Course…
I was thinking about pornography (another discomforting phrase to type) recently—and Potter Stewart’s challenge in defining it, specifically—when I was asked about quality in healthcare. The query, which occurred during a several-hour, mind-numbing meeting (is there another type of several-hour meeting?), was “What is quality?” The question, laced with hostility and dripping with antagonism, was posed by a senior physician and directed pointedly at me. Indignantly, I cleared my throat, mentally stepping onto my pedestal to ceremoniously topple this academic egghead with my erudite response.
“Well, quality is, of course,” I confidently retorted, the “of course” added to demonstrate my moral superiority, “the ability to … uhhh, you see … ummmm, you know.” At which point I again cleared my throat not once, not twice, but a socially awkward three times before employing the time-honored, full-body shock-twitch that signifies that you’ve just received an urgent vibrating page (faked, of course) and excused myself from the meeting, never to return.
The reality is that I struggle to define quality. Like Justice Stewart, I think I know quality when I see it, but more precise definitions can be elusive.
And distracting.
It’s Not My Job
Just this morning, I read a news release from a respected physician group trumpeting the fact that their advocacy resulted in the federal government reducing the number of quality data-point requirements in their final rule for accountable-care organizations (ACOs) from 66 to 33. Trumpeting? Is this a good thing? Should we be supporting fewer quality measures? The article quoted a physician leader saying that the original reporting requirements were too burdensome. Too burdensome to whom? My guess is the recipients of our care, often referred to as our patients, wouldn’t categorize quality assurance as “too burdensome.”
I was at another meeting recently in which a respected colleague related her take on the physician role in improving quality. “I don’t think that’s a physician’s job. That’s what we have a quality department for,” she noted. “It’s just too expensive, time-consuming, and boring for physicians to do that kind of work.”
Too burdensome? Not a physician’s job to ensure the delivery of quality care? While I understand the sentiment (the need to have support staff collecting data, recognition of the huge infrastructure requirements, etc.), I can’t help but think that these types of responses are a large part of the struggle we are having with improving quality.
Then again, I would hazard that 0.0 percent of physicians would argue with the premise that we are obliged by the Hippocratic Oath, our moral compass, and our sense of professionalism to provide the best possible care to our patients. If we accept that we aren’t doing that—and we aren’t—then what is the disconnect? Why aren’t we seeking more quality data points? Why isn’t this “our job”?
Definitional Disconnect
Well, the truth is, it is our job. And we know it. The problem is that quality isn’t universally defined and the process of trying to define it often distracts us from the true task at hand—improving patient care.
Few of us would argue that a wrong-site surgery or anaphylaxis from administration of a medication known to have caused an allergy represents a suboptimal level of care. But more often than not, we see quality being measured and defined in less concrete, more obscure ways—ways that my eyes may not view as low-quality. These definitions are inherently flawed and breed contempt among providers who are told they aren’t passing muster in metrics they don’t see as “quality.”
So the real disconnect is definitional. Is quality defined by the Institute of Medicine characteristics of safe, effective, patient-centered, timely, efficient, and equitable care? Or is it the rates of underuse, overuse, and misuse of medical treatments and procedures? Or is it defined by individual quality metrics such as those captured by the Centers for Medicare & Medicaid Services (CMS)—you know, things like hospital fall rates, perioperative antibiotic usage, beta-blockers after MI, or whether a patient reported their bathroom as being clean?
Is 30% of the quality of care that we deliver referable to the patient experience (as measured by HCAHPS), as the new value-based purchasing program would have us believe? Is it hospital accreditation through the Joint Commission? Or physician certification through our parent boards? Is quality measured by a physician’s cognitive or technical skills, or where they went to school? Is it experience, medical knowledge, guideline usage?
We use such a mystifying array of metrics to define quality that it confuses the issue such that physicians who personally believe they are doing a good job can become disenfranchised. To a physician who provides clinically appropriate care around a surgical procedure or treatment of pneumonia, it can be demeaning and demoralizing to suggest that his or her patient did not receive “high quality” care because the bathroom wasn’t clean or the patient didn’t get a flu shot. Yet, this is the message we often send—a message that alienates many physicians, making them cynical about quality and disengaged from quality improvement. The result is that they seek fewer quality data points and defer the job of improving quality to someone else.
Make no mistake: Quality measures have an important role in our healthcare landscape. But to the degree that defining quality confuses, alienates, or disenfranchises providers, we should stop trying to define it. Quality is not a thing, a metric, or an outcome. It is not an elusive, unquantifiable creature that is achievable only by the elite. Quality is simply providing the best possible care. And quality improvement is simply closing the gap between the best possible care and actual care.
In this regard, we can learn a lot from Potter Stewart. We know quality when we see it. And we know what an absence of quality looks like.
Let’s close that gap by putting less energy into defining quality, and putting more energy into the tenacious pursuit of quality.
Dr. Glasheen is physician editor of The Hospitalist.
Seven-Day Schedule Could Improve Hospital Quality, Capacity
A new study evaluating outcomes for hospitals participating in the American Heart Association’s Get with the Guidelines program found no correlation between high performance on adherence to measures and care standards for acute myocardial infarction and high performance on those for heart failure, despite overlap between the two sets of care processes (J Am Coll Cardiol. 2011;58:637-644).
A total of 400,000 heart patients were studied, and the 283 participating hospitals were stratified into thirds based on their adherence to core quality measures for each disease, with the upper third labeled superior performers. Lead author Tracy Wang, MD, MHS, MSc, of the Duke Clinical Research Institute in Durham, N.C., and colleagues found that hospitals with superior performance for only one of the two diseases had end-result outcomes, such as in-hospital mortality, that were no better than those of hospitals that were not high performers for either condition. Hospitals with superior performance for both conditions, however, had lower in-hospital mortality rates.
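The stratification itself is straightforward to sketch: rank hospitals by adherence for each condition, split them into thirds, and compare outcomes for those in the top third for both conditions against everyone else. The hospital records below are synthetic, generated only to show the mechanics; they are not the study's data.

```python
# Sketch of tertile stratification: top third on adherence for both conditions
# vs. everyone else. Hospital records are synthetic, not the study's data.
import random

random.seed(0)
hospitals = [
    {"ami_adherence": random.uniform(0.7, 1.0),
     "hf_adherence": random.uniform(0.7, 1.0),
     "mortality": random.uniform(0.02, 0.08)}
    for _ in range(283)
]

def is_top_tertile(values, x):
    """True if x falls in the top third of the observed values."""
    cutoff = sorted(values)[2 * len(values) // 3]
    return x > cutoff

ami = [h["ami_adherence"] for h in hospitals]
hf = [h["hf_adherence"] for h in hospitals]

superior_both = [h for h in hospitals
                 if is_top_tertile(ami, h["ami_adherence"])
                 and is_top_tertile(hf, h["hf_adherence"])]
others = [h for h in hospitals if h not in superior_both]

def mean(xs):
    return sum(xs) / len(xs)

print(f"Mortality, superior in both: {mean([h['mortality'] for h in superior_both]):.3f}")
print(f"Mortality, all others:       {mean([h['mortality'] for h in others]):.3f}")
```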
“Perhaps quality is more than just following checklists,” Dr. Wang says. “There’s something special about these high-performing hospitals across the board, with better QI, perhaps a little more investment in infrastructure for quality.”
This result, Dr. Wang says, should give ammunition for hospitalists and other physicians to go to their hospital administrators to request more investment in quality improvement overall, not just for specific conditions.
Intermountain Risk Score Could Help Heart Failure Cases
A risk measurement model created by the Heart Institute at Intermountain Medical Center in Murray, Utah, may one day be a familiar tool to HM groups.
Known as the Intermountain Risk Score (http://intermountainhealthcare.org/IMRS/), the tool uses 15 parameters culled from the complete blood count (CBC) and the basic metabolic panel (BMP) to determine risk. The model, which is free, was used to stratify mortality risk in heart failure patients receiving an implantable cardioverter-defibrillator (ICD) in a paper presented in September at the 15th annual scientific meeting of the Heart Failure Society of America.
The report found that one-year mortality after ICD implantation was 2.4%, 11.8%, and 28.2% for the low-, moderate-, and high-risk groups, respectively. And while the study’s scope was narrow, Benjamin Horne, PhD, director of cardiovascular and genetic epidemiology at the institute, says its application to a multitude of inpatient settings is a natural evolution for the tool.
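To make those risk-stratified figures concrete, here is a minimal Python sketch that tabulates one-year mortality by IMRS category from patient-level records. The records and field names are hypothetical, the risk category is assumed to have been assigned elsewhere (for example, via Intermountain’s online calculator), and the actual weighting of the 15 CBC/BMP parameters is not reproduced here.

```python
"""Hypothetical illustration: tabulate one-year mortality after ICD implantation
by a pre-assigned Intermountain Risk Score (IMRS) category. The IMRS itself is
not computed here; the category is assumed to have been derived elsewhere."""
from collections import defaultdict

# Hypothetical patient records: (imrs_category, died_within_one_year)
patients = [
    ("low", False), ("low", False), ("low", False),
    ("moderate", False), ("moderate", True),
    ("high", True), ("high", False), ("high", True),
]

counts = defaultdict(lambda: [0, 0])  # category -> [deaths, total patients]
for category, died in patients:
    counts[category][0] += int(died)
    counts[category][1] += 1

# The printed percentages reflect only this toy sample, not the study's results.
for category in ("low", "moderate", "high"):
    deaths, total = counts[category]
    print(f"{category:>8}: {100 * deaths / total:.1f}% one-year mortality ({deaths}/{total})")
```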
“One of the things about the innovation of this risk score is the lab tests are so common already,” Dr. Horne says. “They are so familiar to physicians. They’ve been around for decades. What no one had realized before is they had additional risk information contained within them.”
High-Performing Hospitals Invest in QI Infrastructure
A new study evaluating outcomes for hospitals participating in the American Heart Association’s Get with the Guidelines program found that high performance on care measures and standards for acute myocardial infarction did not correlate with high performance on those for heart failure, despite overlap between the two sets of care processes (J Am Coll Cardiol. 2011;58:637-644).
A total of 400,000 heart patients were studied, and the 283 participating hospitals were stratified into thirds based on their adherence to core quality measures for each disease, with the upper third labeled superior performers. Lead author Tracy Wang, MD, MHS, MSc, of the Duke Clinical Research Institute in Durham, N.C., and colleagues found that hospitals with superior performance for only one of the two diseases had end-result outcomes, such as in-hospital mortality, that were no better than those of hospitals that were not high performers for either condition. Hospitals with superior performance for both conditions, however, had lower in-hospital mortality rates.
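For readers who want to see that stratification in concrete terms, here is a minimal Python sketch of the same tertile-based grouping run on synthetic data; the column names, adherence values, and mortality figures are hypothetical stand-ins rather than data from the study.

```python
"""Hypothetical sketch of the tertile stratification described above, run on
synthetic data. None of the numbers below come from the published analysis."""
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
n = 283  # number of participating hospitals in the study

hospitals = pd.DataFrame({
    "ami_adherence": rng.uniform(0.6, 1.0, n),        # composite AMI measure adherence
    "hf_adherence": rng.uniform(0.6, 1.0, n),         # composite HF measure adherence
    "in_hospital_mortality": rng.uniform(0.02, 0.08, n),
})

# Stratify hospitals into thirds for each condition; the top tertile is "superior."
hospitals["ami_tertile"] = pd.qcut(hospitals["ami_adherence"], 3, labels=["low", "mid", "high"])
hospitals["hf_tertile"] = pd.qcut(hospitals["hf_adherence"], 3, labels=["low", "mid", "high"])

def performance_group(row):
    superior_count = int(row["ami_tertile"] == "high") + int(row["hf_tertile"] == "high")
    return {0: "neither", 1: "one condition", 2: "both conditions"}[superior_count]

hospitals["performance_group"] = hospitals.apply(performance_group, axis=1)

# Compare mean in-hospital mortality across the groups, mirroring the study's
# comparison (the synthetic data here carry no real signal).
print(hospitals.groupby("performance_group")["in_hospital_mortality"].mean())
```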
“Perhaps quality is more than just following checklists,” Dr. Wang says. “There’s something special about these high-performing hospitals across the board, with better QI, perhaps a little more investment in infrastructure for quality.”
This result, Dr. Wang says, should give ammunition for hospitalists and other physicians to go to their hospital administrators to request more investment in quality improvement overall, not just for specific conditions.
Joint Commission Launches Certification for Hospital Palliative Care
A new Joint Commission program offering advanced certification for hospital-based palliative-care services is accepting applications and conducting daylong surveys through the end of this month. As with the Joint Commission’s reviews of other specialty services (e.g., primary stroke centers), certification is narrower in scope than a full accreditation survey, which is an organizationwide evaluation of core processes and functions; certification focuses on service-specific evaluation of care and outcomes.
Advanced certification in palliative care is voluntary for the steadily growing number of acute-care hospitals offering palliative-care services (1,568, according to the latest count by the American Hospital Association), but the hospital seeking it must be accredited by the Joint Commission.1 Certification is intended for formal, defined, inpatient palliative care, whether dedicated units or consultation services, with the ability to direct clinical management of patients.
The core palliative-care team includes “licensed independent practitioners” (typically physicians), registered nurses, chaplains, and social workers.2 The service should follow palliative-care guidelines and evidence-based practice, and it must collect quality data on four performance measures—two of them clinical—and use these data to improve performance.
According to Michelle Sacco, the Joint Commission’s executive director for palliative care, evidence-based practice includes ensuring appropriate transitions to other community resources, such as hospices. She thinks the program is perfect for hospitalists, as they increasingly participate in palliative care at their hospitals. “This is also an opportunity to change the mindset that palliative care is for the end-stage only,” Sacco says.
Two-year certification costs $9,655, including the onsite review. For more information, visit the Joint Commission website (www.jointcommission.org/certification) or the Center to Advance Palliative Care’s site (www.capc.org).
References
- Palliative care in hospitals continues rapid growth for 10th straight year, according to latest analysis. Center to Advance Palliative Care website. Available at: www.capc.org/news-and-events/releases/07-14-11. Accessed Aug. 30, 2011.
- The National Consensus Project’s Clinical Practice Guidelines for Quality Palliative Care. The National Consensus Project website. Available at: www.nationalconsensusproject.org/. Accessed Aug. 31, 2011.
HM@15 - Is Hospital Medicine a Good Bet for Improving Patient Satisfaction?
At first glance, the deck might seem hopelessly stacked against hospitalists with regard to patient satisfaction. HM practitioners lack the long-term relationship with patients that many primary-care physicians (PCPs) have established. Unlike surgeons and other specialists, they tend to care for those patients—more complicated, lacking a regular doctor, or admitted through the ED, for example—who are more inclined to rate their hospital stay unfavorably.1 They may not even be accurately remembered by patients who encounter multiple doctors during the course of their hospitalization.2 And hospital information systems can misidentify the treating physician, while the actual surveys used to gauge hospitalists have been imperfect at best.3
And yet, the hospitalist model has evolved substantially on the question of how it can impact patient perceptions of care.
Initially, hospitalist champions adopted a largely defensive posture: The model would not negatively impact patient satisfaction as it delivered on efficiency—and later on quality. The healthcare system, however, is beginning to recognize the hospitalist as part of a care “team” whose patient-centered approach might pay big dividends in the inpatient experience and, eventually, on satisfaction scores.
“I think the next phase, which is a focus on the hospitalist as a team member and team builder, is going to be key,” says William Southern, MD, MPH, SFHM, chief of the division of hospital medicine at Montefiore Medical Center in Bronx, N.Y.
Recent studies suggest that hospitalists are helping to design and test new tools that will not only improve satisfaction, but also more fairly assess the impact of individual doctors. As the maturation process continues, experts say, hospitalists have an opportunity to influence both provider-based interventions and more programmatic decision-making that can have far-reaching effects. Certainly, the hand dealt to hospitalists is looking more favorable even as the ante has been raised with Medicare programs like value-based purchasing, and its pot of money tied to patient perceptions of care.
So how have hospitalists played their cards so far?
A Look at the Evidence
In its early years, the HM model faced a persistent criticism: Replacing traditional caregivers with these new inpatient providers in the name of efficiency would increase handoffs and, therefore, discontinuities of care delivered by a succession of unfamiliar faces. If patients didn’t see their PCP in the hospital, the thinking went, they might be more disgruntled at being tended to by hospitalists, leading to lower satisfaction scores.4
A particularly heated exchange played out in 1999 in the New England Journal of Medicine. Farris A. Manian, MD, MPH, of Infectious Disease Consultants in St. Louis, wrote in one letter, “I am particularly concerned about what impressionable house-staff members will learn from hospitalists who place an inordinate emphasis on cost rather than the quality of patient care or teaching.”5
A few subsequent studies, however, hinted that such concerns might be overstated. A 2000 analysis in the American Journal of Medicine that examined North Mississippi Health Services in Tupelo, for instance, found that care administered by hospitalists led to a shorter length of stay and lower costs than care delivered by internists. Importantly, the study found that patient satisfaction was similar for both models, while quality metrics were likewise equal or even tilted slightly toward hospitalists.6
In their influential 2002 review of a profession that was only a half-decade old, Robert Wachter, MD, MHM, and Lee Goldman, MD, MPH, FACP, of the University of California at San Francisco reinforced the message that HM wouldn’t lead to unhappy patients. “Empirical research supports the premise that hospitalists improve inpatient efficiency without harmful effects on quality or patient satisfaction,” they asserted.7
Among pediatric patients, a 2005 review found that “none of the four studies that evaluated patient satisfaction found statistically significant differences in satisfaction with inpatient care. However, two of the three evaluations that did assess parents’ satisfaction with care provided to their children found that parents were more satisfied with some aspects of care provided by hospitalists.”8
Similar findings were popping up around the country: Replacing an internal medicine residency program with a physician assistant/hospitalist model at Brooklyn, N.Y.’s Coney Island Hospital did not adversely impact patient satisfaction, while it significantly improved mortality.9 Brigham & Women’s Hospital in Boston likewise reported no change in patient satisfaction in a study comparing a physician assistant/hospitalist service with traditional house staff services.10
The shift toward a more proactive position on patient satisfaction is exemplified within a 2008 white paper, “Hospitalists Meeting the Challenge of Patient Satisfaction,” written by a group of 19 private-practice HM experts known as The Phoenix Group.3 The paper acknowledged the flaws and limitations of existing survey methodologies, including Medicare’s Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) scores. Even so, the authors urged practice groups to adopt a team-oriented approach to communicate to hospital administrations “the belief that hospitalists are in the best position to improve survey scores overall for the facility.”
Carle Foundation Hospital in Urbana, Ill., is now publicly advertising its HM service’s contribution to high patient satisfaction scores on its website, and underscoring the hospitalists’ consistency, accessibility, and communication skills. “The hospital is never without a hospitalist, and our nurses know that they can rely on them,” says Lynn Barnes, vice president of hospital operations. “They’re available, they’re within a few minutes away, and patients’ needs get met very efficiently and rapidly.”
As a result, she says, their presence can lead to higher scores in patients’ perceptions of communication.
Hospitalists also have been central to several safety initiatives at Carle. Napoleon Knight, MD, medical director of hospital medicine and associate vice president for quality, says the HM team has helped address undiagnosed sleep apnea and implement rapid responses, such as “Code Speed.” Caregivers or family members can use the code to immediately call for help if they detect a downturn in a patient’s condition.
The ongoing initiatives, Dr. Knight and Barnes say, are helping the hospital improve how patients and their loved ones perceive care as Carle adapts to a rapidly shifting healthcare landscape. “With all of the changes that seem to be coming from the external environment weekly, we want to work collaboratively to make sure we’re connected and aligned and communicating in an ongoing fashion so we can react to all of these changes,” Dr. Knight says.
A Hopeful Trend
So far, evidence that the HM model is more broadly raising patient satisfaction scores is largely anecdotal. But a few analyses suggest the trend is moving in the right direction. A recent study in the American Journal of Medical Quality, for instance, concludes that facilities with hospitalists might have an advantage in patient satisfaction with nursing and such personal issues as privacy, emotional needs, and response to complaints.11 The study also posits that teaching facilities employing hospitalists could see benefits in overall satisfaction, while large facilities with hospitalists might see gains in satisfaction with admissions, nursing, and tests and treatments.
Brad Fulton, PhD, a researcher at South Bend, Ind.-based healthcare consulting firm Press Ganey and the study’s lead author, says the 30,000-foot view of patient satisfaction at the facility level can get foggy in a hurry due to differences in the kind and size of hospitalist programs. “And despite all of that fog, we’re still able to see through that and find something,” he says.
One limitation is that the study findings could also reflect differences in the culture of facilities that choose to add hospitalists. That caveat means it might not be possible to completely untangle the effect of an HM group on inpatient care from the larger, hospitalwide values that have allowed the group to set up shop. The wrinkle brings its own fascinating questions, according to Fulton. For example, is that kind of culture necessary for hospitalists to function as well as they do?
Such considerations will become more important as the healthcare system places additional emphasis on patient satisfaction, as Medicare’s value-based purchasing program is doing through its HCAHPS scores. With all the changes, success or failure on the patient experience front is going to carry “not just a reputational import, but also a financial impact,” says Ethan Cumbler, MD, FACP, director of Acute Care for the Elderly (ACE) Service at the University of Colorado Denver.
So how can HM fairly and accurately assess its own practitioners? “I think one starts by trying to apply some of the rigor that we have learned from our experience as hospitalists in quality improvement to the more warm and fuzzy field of patient experience,” Dr. Cumbler says. Many hospitals employ surveys supplied by consultants like Press Ganey to track the global patient satisfaction for their institution, he says.
“But for an individual hospitalist or hospitalist group, that kind of tool often lacks both the specificity and the timeliness necessary to make good decisions about impact of interventions on patient satisfaction,” he says.
Mark Williams, MD, FACP, FHM, professor and chief of the division of hospital medicine at Northwestern University’s Feinberg School of Medicine in Chicago, agrees that such imprecision could lead to unfair assessments. “You can imagine a scenario where a patient actually liked their hospitalist very much,” he says, “but when they got the survey, they said [their stay] was terrible and the reasons being because maybe the nurse call button was not answered and the food was terrible and medications were given to them incorrectly, or it was noisy at night so they couldn’t sleep.”
A recent study by Dr. Williams and his colleagues, in which they employed a new assessment method called the Communication Assessment Tool (CAT), confirmed the group’s suspicions: “that the results from the Press Ganey didn’t match up with the CAT, which was a direct assessment of the patient’s perception of the hospitalist’s communication skills,” he says.12
The validated tool, he adds, provides directed feedback to the physician based on the percentage of patients rating that provider as excellent, instead of on the average total score. Hospitalists have felt vindicated by the results. “They were very nervous because the hospital talked about basing an incentive off of the Press Ganey scores, and we said, ‘You can’t do that,’ because we didn’t feel they were accurate, and this study proved that,” Dr. Williams explains.
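The difference between the two scoring approaches is easy to see with a small worked example in Python; the ratings below are hypothetical and use a 1-to-5 scale with 5 meaning “excellent.”

```python
"""Worked example (hypothetical ratings): contrast the average total score with
the percentage of patients rating a provider "excellent" (the top-box score)."""
ratings = [5, 5, 4, 4, 4, 4, 3, 5, 4, 5]  # 1-5 scale, 5 = excellent

mean_score = sum(ratings) / len(ratings)
percent_excellent = 100 * sum(1 for r in ratings if r == 5) / len(ratings)

print(f"Average total score: {mean_score:.1f} / 5")
print(f"Rated excellent:     {percent_excellent:.0f}% of patients")
```

On this sample the average score is 4.3 out of 5, yet only 40% of patients rated the provider excellent; that gap is what top-box feedback is meant to surface.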
Fortunately, the message has reached researchers and consultants alike, and better tools are starting to reach hospitals around the country. At HM11 in May, Press Ganey unveiled a new survey designed to help patients assess the care delivered by up to two hospitalists, the average number involved in an inpatient stay. The item set is specific to HM functions and includes the photo and name of each hospitalist, which Fulton says should improve the validity and accuracy of the data.
“The early response looks really good,” Fulton says, though it’s too early to say whether the tool, called Hospitalist Insight, will live up to its billing. If it proves its mettle, Fulton says, the survey could be used to reward top-performing hospitalists, and the growing dataset could allow hospitals to compare themselves with appropriate peer groups for fairer comparisons.
Meanwhile, researchers are testing checklists to score hospitalist etiquette, as well as tracking and paging systems to help ensure continuity of care. They have found increased patient satisfaction when doctors communicate verbally at discharge, participate in interdisciplinary team rounding, and make efforts to address religious and spiritual concerns.
Since 2000, when Montefiore’s hospitalist program began, Dr. Southern says the hospital has explained to patients the tradeoff accompanying the HM model. “I say something like this to every patient: ‘I know I’m not the doctor that you know, and you’re just meeting me. The downside is that you haven’t met me before and I’m a new face, but the upside is that if you need me during the day, I’m here all the time, I’m not someplace else. And so if you need something, I can be here quickly.’ ”
Being very explicit about that tradeoff, he says, has made patients very comfortable with the model of care, especially during a crisis moment in their lives. “I think it’s really important to say, ‘I know you don’t know me, but here’s the upside.’ And my experience is that patients easily understand that tradeoff and are very positive,” Dr. Southern says.
The Verdict
Available evidence suggests that practitioners of the HM model have pivoted from defending against early criticism that they may harm patient satisfaction to pitching themselves as team leaders who can boost facilitywide perceptions of care. So far, too little research has been conducted to suggest whether that optimism is fully warranted, but early signs look promising.
At facilities like Chicago’s Northwestern Memorial Hospital, medical floors staffed by hospitalists are beginning to beat out surgical floors for the traveling patient satisfaction award. And experts like Dr. Cumbler are pondering how ongoing initiatives to boost scores can follow in the footsteps of efficiency and quality-raising efforts by making the transition from focusing on individual doctors to adopting a more programmatic approach. “What’s happening to that patient during the 23 hours and 45 minutes of their hospital day that you are not sitting by the bedside? And what influence should a hospitalist have in affecting that other 23 hours and 45 minutes?” he says.
Handoffs, discharges, communication with PCPs, and other potential weak points in maintaining high levels of patient satisfaction, Dr. Cumbler says, all are amenable to systems-based improvement. “As hospitalists, we are in a unique position to influence not only our one-on-one interaction with the patient, but also to influence that system of care in a way that patients will notice in a real and tangible way,” he says. “I think we’ve recognized for some time that a healthy heart but a miserable patient is not a healthy person.”
Bryn Nelson is a freelance medical journalist based in Seattle.
References
- Williams M, Flanders SA, Whitcomb WF. Comprehensive hospital medicine: an evidence-based approach. Elsevier; 2007:971-976.
- Arora V, Gangireddy S, Mehrotra A, Ginde R, Tormey M, Meltzer D. Ability of hospitalized patients to identify their in-hospital physicians. Arch Intern Med. 2009;169(2):199-201.
- Singer AS, et al. Hospitalists meeting the challenge of patient satisfaction. The Phoenix Group. 2008;1-5.
- Manian FA. Whither continuity of care? N Engl J Med. 1999;340:1362-1363.
- Correspondence. Whither continuity of care? N Engl J Med. 1999;341:850-852.
- Davis KM, Koch KE, Harvey JK, et al. Effects of hospitalists on cost, outcomes, and patient satisfaction in a rural health system. Am J Med. 2000;108(8):621-626.
- Wachter RM, Goldman L. The hospitalist movement 5 years later. JAMA. 2002;287(4):487-494.
- Coffman J, Rundall TG. The impact of hospitalists on the cost and quality of inpatient care in the United States (a research synthesis). Med Care Res Rev. 2005;62:379–406.
- Dhuper S, Choksi S. Replacing an academic internal medicine residency program with a physician assistant-hospitalist model: a comparative analysis study. Am J Med Qual. 2009;24(2):132-139.
- Roy CL, Liang CL, Lund M, et al. Implementation of a physician assistant/hospitalist service in an academic medical center: impact on efficiency and patient outcomes. J Hosp Med. 2008;3(5):361-368.
- Fulton BR, Drevs KE, Ayala LJ, Malott DL Jr. Patient satisfaction with hospitalists: facility-level analyses. Am J Med Qual. 2011;26(2):95-102.
- Ferranti DE, Makoul G, Forth VE, Rauworth J, Lee J, Williams MV. Assessing patient perceptions of hospitalist communication skills using the Communication Assessment Tool (CAT). J Hosp Med. 2010;5(9):522-527.
At first glance, the deck might seem hopelessly stacked against hospitalists with regard to patient satisfaction. HM practitioners lack the long-term relationship with patients that many primary-care physicians (PCPs) have established. Unlike surgeons and other specialists, they tend to care for those patients—more complicated, lacking a regular doctor, or admitted through the ED, for example—who are more inclined to rate their hospital stay unfavorably.1 They may not even be accurately remembered by patients who encounter multiple doctors during the course of their hospitalization.2 And hospital information systems can misidentify the treating physician, while the actual surveys used to gauge hospitalists have been imperfect at best.3
And yet, the hospitalist model has evolved substantially on the question of how it can impact patient perceptions of care.
Initially, hospitalist champions adopted a largely defensive posture: The model would not negatively impact patient satisfaction as it delivered on efficiency—and later on quality. The healthcare system, however, is beginning to recognize the hospitalist as part of a care “team” whose patient-centered approach might pay big dividends in the inpatient experience and, eventually, on satisfaction scores.
“I think the next phase, which is a focus on the hospitalist as a team member and team builder, is going to be key,” says William Southern, MD, MPH, SFHM, chief of the division of hospital medicine at Montefiore Medical Center in Bronx, N.Y.
Recent studies suggest that hospitalists are helping to design and test new tools that will not only improve satisfaction, but also more fairly assess the impact of individual doctors. As the maturation process continues, experts say, hospitalists have an opportunity to influence both provider-based interventions and more programmatic decision-making that can have far-reaching effects. Certainly, the hand dealt to hospitalists is looking more favorable even as the ante has been raised with Medicare programs like value-based purchasing, and its pot of money tied to patient perceptions of care.
So how have hospitalists played their cards so far?
A Look at the Evidence
In its early years, the HM model faced a persistent criticism: Replacing traditional caregivers with these new inpatient providers in the name of efficiency would increase handoffs and, therefore, discontinuities of care delivered by a succession of unfamiliar faces. If patients didn’t see their PCP in the hospital, the thinking went, they might be more disgruntled at being tended to by hospitalists, leading to lower satisfaction scores.4
A particularly heated exchange played out in 1999 in the New England Journal of Medicine. Farris A. Manian, MD, MPH, of Infectious Disease Consultants in St. Louis wrote in one letter, “I am particularly concerned about what impressionable house-staff members will learn from hospitalists who place an inordinate emphasis on cost rather than the quality of patient care or teaching.”5
A few subsequent studies, however, hinted that such concerns might be overstated. A 2000 analysis in the American Journal of Medicine that examined North Mississippi Health Services in Tupelo, for instance, found that care administered by hospitalists led to a shorter length of stay and lower costs than care delivered by internists. Importantly, the study found that patient satisfaction was similar for both models, while quality metrics were likewise equal or even tilted slightly toward hospitalists.6
In their influential 2002 review of a profession that was only a half-decade old, Robert Wachter, MD, MHM, and Lee Goldman, MD, MPH, FACP from the University of California at San Francisco reinforced the message that HM wouldn’t lead to unhappy patients. “Empirical research supports the premise that hospitalists improve inpatient efficiency without harmful effects on quality or patient satisfaction,” they asserted.7
Among pediatric patients, a 2005 review found that “none of the four studies that evaluated patient satisfaction found statistically significant differences in satisfaction with inpatient care. However, two of the three evaluations that did assess parents’ satisfaction with care provided to their children found that parents were more satisfied with some aspects of care provided by hospitalists.”8

—William Southern, MD, chief, division of hospital medicine, Montefiore Medical Center, Bronx, N.Y.
Similar findings were popping up around the country: Replacing an internal medicine residency program with a physician assistant/hospitalist model at Brooklyn, N.Y.’s Coney Island Hospital did not adversely impact patient satisfaction, while it significantly improved mortality.9 Brigham & Women’s Hospital in Boston likewise reported no change in patient satisfaction in a study comparing a physician assistant/hospitalist service with traditional house staff services.10
The shift toward a more proactive position on patient satisfaction is exemplified within a 2008 white paper, “Hospitalists Meeting the Challenge of Patient Satisfaction,” written by a group of 19 private-practice HM experts known as The Phoenix Group.3 The paper acknowledged the flaws and limitations of existing survey methodologies, including Medicare’s Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) scores. Even so, the authors urged practice groups to adopt a team-oriented approach to communicate to hospital administrations “the belief that hospitalists are in the best position to improve survey scores overall for the facility.”
Carle Foundation Hospital in Urbana, Ill., is now publicly advertising its HM service’s contribution to high patient satisfaction scores on its website, and underscoring the hospitalists’ consistency, accessibility, and communication skills. “The hospital is never without a hospitalist, and our nurses know that they can rely on them,” says Lynn Barnes, vice president of hospital operations. “They’re available, they’re within a few minutes away, and patients’ needs get met very efficiently and rapidly.”
As a result, she says, their presence can lead to higher scores in patients’ perceptions of communication.
Hospitalists also have been central to several safety initiatives at Carle. Napoleon Knight, MD, medical director of hospital medicine and associate vice president for quality, says the HM team has helped address undiagnosed sleep apnea and implement rapid responses, such as “Code Speed.” Caregivers or family members can use the code to immediately call for help if they detect a downturn in a patient’s condition.
The ongoing initiatives, Dr. Knight and Barnes say, are helping the hospital improve how patients and their loved ones perceive care as Carle adapts to a rapidly shifting healthcare landscape. “With all of the changes that seem to be coming from the external environment weekly, we want to work collaboratively to make sure we’re connected and aligned and communicating in an ongoing fashion so we can react to all of these changes,” Dr. Knight says.
Continued below...
A Hopeful Trend
So far, evidence that the HM model is more broadly raising patient satisfaction scores is largely anecdotal. But a few analyses suggest the trend is moving in the right direction. A recent study in the American Journal of Medical Quality, for instance, concludes that facilities with hospitalists might have an advantage in patient satisfaction with nursing and such personal issues as privacy, emotional needs, and response to complaints.11 The study also posits that teaching facilities employing hospitalists could see benefits in overall satisfaction, while large facilities with hospitalists might see gains in satisfaction with admissions, nursing, and tests and treatments.
Brad Fulton, PhD, a researcher at South Bend, Ind.-based healthcare consulting firm Press Ganey and the study’s lead author, says the 30,000-foot view of patient satisfaction at the facility level can get foggy in a hurry due to differences in the kind and size of hospitalist programs. “And despite all of that fog, we’re still able to see through that and find something,” he says.
One limitation is that the study findings could also reflect differences in the culture of facilities that choose to add hospitalists. That caveat means it might not be possible to completely untangle the effect of an HM group on inpatient care from the larger, hospitalwide values that have allowed the group to set up shop. The wrinkle brings its own fascinating questions, according to Fulton. For example, is that kind of culture necessary for hospitalists to function as well as they do?

—Lynn Barnes, vice president of hospital operations, Carle Foundation Hospital, Urbana, Ill.
Such considerations will become more important as the healthcare system places additional emphasis on patient satisfaction, as Medicare’s value-based purchasing program is doing through its HCAHPS scores. With all the changes, success or failure on the patient experience front is going to carry “not just a reputational import, but also a financial impact,” says Ethan Cumbler, MD, FACP, director of Acute Care for the Elderly (ACE) Service at the University of Colorado Denver.
So how can HM fairly and accurately assess its own practitioners? “I think one starts by trying to apply some of the rigor that we have learned from our experience as hospitalists in quality improvement to the more warm and fuzzy field of patient experience,” Dr. Cumbler says. Many hospitals employ surveys supplied by consultants like Press Ganey to track the global patient satisfaction for their institution, he says.
“But for an individual hospitalist or hospitalist group, that kind of tool often lacks both the specificity and the timeliness necessary to make good decisions about impact of interventions on patient satisfaction,” he says.
Mark Williams, MD, FACP, FHM, professor and chief of the division of hospital medicine at Northwestern University’s Feinberg School of Medicine in Chicago, agrees that such imprecision could lead to unfair assessments. “You can imagine a scenario where a patient actually liked their hospitalist very much,” he says, “but when they got the survey, they said [their stay] was terrible and the reasons being because maybe the nurse call button was not answered and the food was terrible and medications were given to them incorrectly, or it was noisy at night so they couldn’t sleep.”
A recent study by Dr. Williams and his colleagues, in which they employed a new assessment method called the Communication Assessment Tool (CAT), confirmed the group’s suspicions: “that the results from the Press Ganey didn’t match up with the CAT, which was a direct assessment of the patient’s perception of the hospitalist’s communication skills,” he says.12
The validated tool, he adds, provides directed feedback to the physician based on the percentage of patients rating that provider as excellent, instead of on the average total score. Hospitalists have felt vindicated by the results. “They were very nervous because the hospital talked about basing an incentive off of the Press Ganey scores, and we said, ‘You can’t do that,’ because we didn’t feel they were accurate, and this study proved that,” Dr. Williams explains.
Fortunately, the message has reached researchers and consultants alike, and better tools are starting to reach hospitals around the country. At HM11 in May, Press Ganey unveiled a new survey designed to help patients assess the care delivered by two hospitalists, the average for inpatient stays. The item set is specific to HM functions, and includes the photo and name of each hospitalist, which Fulton says should improve the validity and accuracy of the data.
“The early response looks really good,” Fulton says, though it’s too early to say whether the tool, called Hospitalist Insight, will live up to its billing. If it proves its mettle, Fulton says, the survey could be used to reward top-performing hospitalists, and the growing dataset could allow hospitals to compare themselves with appropriate peer groups for fairer comparisons.
Meanwhile, researchers are testing out checklists to score hospitalist etiquette, and tracking and paging systems to help ensure continuity of care. They have found increased patient satisfaction when doctors engage in verbal communication during a discharge, in interdisciplinary team rounding, and in efforts to address religious and spiritual concerns.
Since 2000, when Montefiore’s hospitalist program began, Dr. Southern says the hospital has explained to patients the tradeoff accompanying the HM model. “I say something like this to every patient: ‘I know I’m not the doctor that you know, and you’re just meeting me. The downside is that you haven’t met me before and I’m a new face, but the upside is that if you need me during the day, I’m here all the time, I’m not someplace else. And so if you need something, I can be here quickly.’ ”
Being very explicit about that tradeoff, he says, has made patients very comfortable with the model of care, especially during a crisis moment in their lives. “I think it’s really important to say, ‘I know you don’t know me, but here’s the upside.’ And my experience is that patients easily understand that tradeoff and are very positive,” Dr. Southern says.
The Verdict
Available evidence suggests that practitioners of the HM model have pivoted from defending against early criticism that they may harm patient satisfaction to pitching themselves as team leaders who can boost facilitywide perceptions of care. So far, too little research has been conducted to suggest whether that optimism is fully warranted, but early signs look promising.
At facilities like Chicago’s Northwestern Memorial Hospital, medical floors staffed by hospitalists are beginning to beat out surgical floors for the traveling patient satisfaction award. And experts like Dr. Cumbler are pondering how ongoing initiatives to boost scores can follow in the footsteps of efficiency and quality-raising efforts by making the transition from focusing on individual doctors to adopting a more programmatic approach. “What’s happening to that patient during the 23 hours and 45 minutes of their hospital day that you are not sitting by the bedside? And what influence should a hospitalist have in affecting that other 23 hours and 45 minutes?” he says.
Handoffs, discharges, communication with PCPs, and other potential weak points in maintaining high levels of patient satisfaction, Dr. Cumbler says, all are amenable to systems-based improvement. “As hospitalists, we are in a unique position to influence not only our one-one-one interaction with the patient, but also to influence that system of care in a way that patients will notice in a real and tangible way,” he says. “I think we’ve recognized for some time that a healthy heart but a miserable patient is not a healthy person.”
Bryn Nelson is a freelance medical journalist based in Seattle.
References
- Williams M, Flanders SA, Whitcomb WF. Comprehensive hospital medicine: an evidence based approach. Elsevier;2007:971-976.
- Arora V, Gangireddy S, Mehrotra A, Ginde R, Tormey M, Meltzer D. Ability of hospitalized patients to identify their in-hospital physicians. Arch Intern Med. 2009;169(2):199-201.
- Singer AS, et al. Hospitalists meeting the challenge of patient satisfaction. The Phoenix Group. 2008;1-5.
- Manian FA. Whither continuity of care? N Engl J Med. 1999;340:1362-1363.
- Correspondence. Whither continuity of care? N Engl J Med. 1999;341:850-852.
- Davis KM, Koch KE, Harvey JK, et al. Effects of hospitalists on cost, outcomes, and patient satisfaction in a rural health system. Amer J Med. 2000;108(8):621-626.
- Wachter RM, Goldman L. The hospitalist movement 5 years later. JAMA. 2002;287(4):487-494.
- Coffman J, Rundall TG. The impact of hospitalists on the cost and quality of inpatient care in the United States (a research synthesis). Med Care Res Rev. 2005;62:379–406.
- Dhuper S, Choksi S. Replacing an academic internal medicine residency program with a physician assistant-hospitalist model: a comparative analysis study. Am J Med Qual. 2009;24(2):132-139.
- Roy CL, Liang CL, Lund M, et al. Implementation of a physician assistant/hospitalist service in an academic medical center: impact on efficiency and patient outcomes. J Hosp Med. 2008;3(5):361-368.
- Fulton BR, Drevs KE, Ayala LJ, Malott DL Jr. Patient satisfaction with hospitalists: facility-level analyses. Am J Med Qual. 2011;26(2):95-102.
- Ferranti DE, Makoul G, Forth VE, Rauworth J, Lee J, Williams MV. Assessing patient perceptions of hospitalist communication skills using the Communication Assessment Tool (CAT). J Hosp Med. 2010;5(9):522-527.
At first glance, the deck might seem hopelessly stacked against hospitalists with regard to patient satisfaction. HM practitioners lack the long-term relationship with patients that many primary-care physicians (PCPs) have established. Unlike surgeons and other specialists, they tend to care for those patients—more complicated, lacking a regular doctor, or admitted through the ED, for example—who are more inclined to rate their hospital stay unfavorably.1 They may not even be accurately remembered by patients who encounter multiple doctors during the course of their hospitalization.2 And hospital information systems can misidentify the treating physician, while the actual surveys used to gauge hospitalists have been imperfect at best.3
And yet, the hospitalist model has evolved substantially on the question of how it can impact patient perceptions of care.
Initially, hospitalist champions adopted a largely defensive posture: The model would not negatively impact patient satisfaction as it delivered on efficiency—and later on quality. The healthcare system, however, is beginning to recognize the hospitalist as part of a care “team” whose patient-centered approach might pay big dividends in the inpatient experience and, eventually, on satisfaction scores.
“I think the next phase, which is a focus on the hospitalist as a team member and team builder, is going to be key,” says William Southern, MD, MPH, SFHM, chief of the division of hospital medicine at Montefiore Medical Center in Bronx, N.Y.
Recent studies suggest that hospitalists are helping to design and test new tools that will not only improve satisfaction, but also more fairly assess the impact of individual doctors. As the maturation process continues, experts say, hospitalists have an opportunity to influence both provider-based interventions and more programmatic decision-making that can have far-reaching effects. Certainly, the hand dealt to hospitalists is looking more favorable even as the ante has been raised with Medicare programs like value-based purchasing, and its pot of money tied to patient perceptions of care.
So how have hospitalists played their cards so far?
A Look at the Evidence
In its early years, the HM model faced a persistent criticism: Replacing traditional caregivers with these new inpatient providers in the name of efficiency would increase handoffs and, therefore, discontinuities of care delivered by a succession of unfamiliar faces. If patients didn’t see their PCP in the hospital, the thinking went, they might be more disgruntled at being tended to by hospitalists, leading to lower satisfaction scores.4
A particularly heated exchange played out in 1999 in the New England Journal of Medicine. Farris A. Manian, MD, MPH, of Infectious Disease Consultants in St. Louis wrote in one letter, “I am particularly concerned about what impressionable house-staff members will learn from hospitalists who place an inordinate emphasis on cost rather than the quality of patient care or teaching.”5
A few subsequent studies, however, hinted that such concerns might be overstated. A 2000 analysis in the American Journal of Medicine that examined North Mississippi Health Services in Tupelo, for instance, found that care administered by hospitalists led to a shorter length of stay and lower costs than care delivered by internists. Importantly, the study found that patient satisfaction was similar for both models, while quality metrics were likewise equal or even tilted slightly toward hospitalists.6
In their influential 2002 review of a profession that was only a half-decade old, Robert Wachter, MD, MHM, and Lee Goldman, MD, MPH, FACP from the University of California at San Francisco reinforced the message that HM wouldn’t lead to unhappy patients. “Empirical research supports the premise that hospitalists improve inpatient efficiency without harmful effects on quality or patient satisfaction,” they asserted.7
Among pediatric patients, a 2005 review found that “none of the four studies that evaluated patient satisfaction found statistically significant differences in satisfaction with inpatient care. However, two of the three evaluations that did assess parents’ satisfaction with care provided to their children found that parents were more satisfied with some aspects of care provided by hospitalists.”8

—William Southern, MD, chief, division of hospital medicine, Montefiore Medical Center, Bronx, N.Y.
Similar findings were popping up around the country: Replacing an internal medicine residency program with a physician assistant/hospitalist model at Brooklyn, N.Y.’s Coney Island Hospital did not adversely impact patient satisfaction, while it significantly improved mortality.9 Brigham & Women’s Hospital in Boston likewise reported no change in patient satisfaction in a study comparing a physician assistant/hospitalist service with traditional house staff services.10
The shift toward a more proactive position on patient satisfaction is exemplified within a 2008 white paper, “Hospitalists Meeting the Challenge of Patient Satisfaction,” written by a group of 19 private-practice HM experts known as The Phoenix Group.3 The paper acknowledged the flaws and limitations of existing survey methodologies, including Medicare’s Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) scores. Even so, the authors urged practice groups to adopt a team-oriented approach to communicate to hospital administrations “the belief that hospitalists are in the best position to improve survey scores overall for the facility.”
Carle Foundation Hospital in Urbana, Ill., is now publicly advertising its HM service’s contribution to high patient satisfaction scores on its website, and underscoring the hospitalists’ consistency, accessibility, and communication skills. “The hospital is never without a hospitalist, and our nurses know that they can rely on them,” says Lynn Barnes, vice president of hospital operations. “They’re available, they’re within a few minutes away, and patients’ needs get met very efficiently and rapidly.”
As a result, she says, their presence can lead to higher scores in patients’ perceptions of communication.
Hospitalists also have been central to several safety initiatives at Carle. Napoleon Knight, MD, medical director of hospital medicine and associate vice president for quality, says the HM team has helped address undiagnosed sleep apnea and implement rapid responses, such as “Code Speed.” Caregivers or family members can use the code to immediately call for help if they detect a downturn in a patient’s condition.
The ongoing initiatives, Dr. Knight and Barnes say, are helping the hospital improve how patients and their loved ones perceive care as Carle adapts to a rapidly shifting healthcare landscape. “With all of the changes that seem to be coming from the external environment weekly, we want to work collaboratively to make sure we’re connected and aligned and communicating in an ongoing fashion so we can react to all of these changes,” Dr. Knight says.
A Hopeful Trend
So far, evidence that the HM model is more broadly raising patient satisfaction scores is largely anecdotal. But a few analyses suggest the trend is moving in the right direction. A recent study in the American Journal of Medical Quality, for instance, concludes that facilities with hospitalists might have an advantage in patient satisfaction with nursing and such personal issues as privacy, emotional needs, and response to complaints.11 The study also posits that teaching facilities employing hospitalists could see benefits in overall satisfaction, while large facilities with hospitalists might see gains in satisfaction with admissions, nursing, and tests and treatments.
Brad Fulton, PhD, a researcher at South Bend, Ind.-based healthcare consulting firm Press Ganey and the study’s lead author, says the 30,000-foot view of patient satisfaction at the facility level can get foggy in a hurry due to differences in the kind and size of hospitalist programs. “And despite all of that fog, we’re still able to see through that and find something,” he says.
One limitation is that the study findings could also reflect differences in the culture of facilities that choose to add hospitalists. That caveat means it might not be possible to completely untangle the effect of an HM group on inpatient care from the larger, hospitalwide values that have allowed the group to set up shop. The wrinkle brings its own fascinating questions, according to Fulton. For example, is that kind of culture necessary for hospitalists to function as well as they do?

Such considerations will become more important as the healthcare system places additional emphasis on patient satisfaction, as Medicare’s value-based purchasing program is doing through its HCAHPS scores. With all the changes, success or failure on the patient experience front is going to carry “not just a reputational import, but also a financial impact,” says Ethan Cumbler, MD, FACP, director of Acute Care for the Elderly (ACE) Service at the University of Colorado Denver.
So how can HM fairly and accurately assess its own practitioners? “I think one starts by trying to apply some of the rigor that we have learned from our experience as hospitalists in quality improvement to the more warm and fuzzy field of patient experience,” Dr. Cumbler says. Many hospitals employ surveys supplied by consultants like Press Ganey to track the global patient satisfaction for their institution, he says.
“But for an individual hospitalist or hospitalist group, that kind of tool often lacks both the specificity and the timeliness necessary to make good decisions about impact of interventions on patient satisfaction,” he says.
Mark Williams, MD, FACP, FHM, professor and chief of the division of hospital medicine at Northwestern University’s Feinberg School of Medicine in Chicago, agrees that such imprecision could lead to unfair assessments. “You can imagine a scenario where a patient actually liked their hospitalist very much,” he says, “but when they got the survey, they said [their stay] was terrible and the reasons being because maybe the nurse call button was not answered and the food was terrible and medications were given to them incorrectly, or it was noisy at night so they couldn’t sleep.”
A recent study by Dr. Williams and his colleagues, in which they employed a new assessment method called the Communication Assessment Tool (CAT), confirmed the group’s suspicions: “that the results from the Press Ganey didn’t match up with the CAT, which was a direct assessment of the patient’s perception of the hospitalist’s communication skills,” he says.12
The validated tool, he adds, provides directed feedback to the physician based on the percentage of patients rating that provider as excellent, instead of on the average total score. Hospitalists have felt vindicated by the results. “They were very nervous because the hospital talked about basing an incentive off of the Press Ganey scores, and we said, ‘You can’t do that,’ because we didn’t feel they were accurate, and this study proved that,” Dr. Williams explains.
Fortunately, the message has reached researchers and consultants alike, and better tools are starting to reach hospitals around the country. At HM11 in May, Press Ganey unveiled a new survey designed to help patients assess the care delivered by two hospitalists, the average number a patient sees during an inpatient stay. The item set is specific to HM functions and includes the photo and name of each hospitalist, which Fulton says should improve the validity and accuracy of the data.
“The early response looks really good,” Fulton says, though it’s too early to say whether the tool, called Hospitalist Insight, will live up to its billing. If it proves its mettle, Fulton says, the survey could be used to reward top-performing hospitalists, and the growing dataset could allow hospitals to compare themselves with appropriate peer groups for fairer comparisons.
Meanwhile, researchers are testing out checklists to score hospitalist etiquette, and tracking and paging systems to help ensure continuity of care. They have found increased patient satisfaction when doctors engage in verbal communication during a discharge, in interdisciplinary team rounding, and in efforts to address religious and spiritual concerns.
Since 2000, when Montefiore’s hospitalist program began, Dr. Southern says the hospital has explained to patients the tradeoff accompanying the HM model. “I say something like this to every patient: ‘I know I’m not the doctor that you know, and you’re just meeting me. The downside is that you haven’t met me before and I’m a new face, but the upside is that if you need me during the day, I’m here all the time, I’m not someplace else. And so if you need something, I can be here quickly.’ ”
Being very explicit about that tradeoff, he says, has made patients very comfortable with the model of care, especially during a crisis moment in their lives. “I think it’s really important to say, ‘I know you don’t know me, but here’s the upside.’ And my experience is that patients easily understand that tradeoff and are very positive,” Dr. Southern says.
The Verdict
Available evidence suggests that practitioners of the HM model have pivoted from defending against early criticism that they may harm patient satisfaction to pitching themselves as team leaders who can boost facilitywide perceptions of care. So far, too little research has been conducted to say whether that optimism is fully warranted, but early signs look promising.
At facilities like Chicago’s Northwestern Memorial Hospital, medical floors staffed by hospitalists are beginning to beat out surgical floors for the traveling patient satisfaction award. And experts like Dr. Cumbler are pondering how ongoing initiatives to boost scores can follow in the footsteps of efficiency and quality-raising efforts by making the transition from focusing on individual doctors to adopting a more programmatic approach. “What’s happening to that patient during the 23 hours and 45 minutes of their hospital day that you are not sitting by the bedside? And what influence should a hospitalist have in affecting that other 23 hours and 45 minutes?” he says.
Handoffs, discharges, communication with PCPs, and other potential weak points in maintaining high levels of patient satisfaction, Dr. Cumbler says, all are amenable to systems-based improvement. “As hospitalists, we are in a unique position to influence not only our one-on-one interaction with the patient, but also to influence that system of care in a way that patients will notice in a real and tangible way,” he says. “I think we’ve recognized for some time that a healthy heart but a miserable patient is not a healthy person.”
Bryn Nelson is a freelance medical journalist based in Seattle.
References
- Williams M, Flanders SA, Whitcomb WF. Comprehensive hospital medicine: an evidence-based approach. Elsevier; 2007:971-976.
- Arora V, Gangireddy S, Mehrotra A, Ginde R, Tormey M, Meltzer D. Ability of hospitalized patients to identify their in-hospital physicians. Arch Intern Med. 2009;169(2):199-201.
- Singer AS, et al. Hospitalists meeting the challenge of patient satisfaction. The Phoenix Group. 2008;1-5.
- Manian FA. Whither continuity of care? N Engl J Med. 1999;340:1362-1363.
- Correspondence. Whither continuity of care? N Engl J Med. 1999;341:850-852.
- Davis KM, Koch KE, Harvey JK, et al. Effects of hospitalists on cost, outcomes, and patient satisfaction in a rural health system. Am J Med. 2000;108(8):621-626.
- Wachter RM, Goldman L. The hospitalist movement 5 years later. JAMA. 2002;287(4):487-494.
- Coffman J, Rundall TG. The impact of hospitalists on the cost and quality of inpatient care in the United States: a research synthesis. Med Care Res Rev. 2005;62:379-406.
- Dhuper S, Choksi S. Replacing an academic internal medicine residency program with a physician assistant-hospitalist model: a comparative analysis study. Am J Med Qual. 2009;24(2):132-139.
- Roy CL, Liang CL, Lund M, et al. Implementation of a physician assistant/hospitalist service in an academic medical center: impact on efficiency and patient outcomes. J Hosp Med. 2008;3(5):361-368.
- Fulton BR, Drevs KE, Ayala LJ, Malott DL Jr. Patient satisfaction with hospitalists: facility-level analyses. Am J Med Qual. 2011;26(2):95-102.
- Ferranti DE, Makoul G, Forth VE, Rauworth J, Lee J, Williams MV. Assessing patient perceptions of hospitalist communication skills using the Communication Assessment Tool (CAT). J Hosp Med. 2010;5(9):522-527.