Hospitalists Can’t Ignore Rise in Carbapenem-Resistant Enterobacteriaceae (CRE) Infections
Neil Fishman, MD, associate chief medical officer at the University of Pennsylvania Health System in Philadelphia, sounds like a football coach when he says the best way to fight carbapenem-resistant Enterobacteriaceae (CRE) infections is with a good defense. To prevent the spread of CRE, hospitalists and others should focus on contact precautions, hand hygiene, and removing gowns and gloves before entering new rooms, and should even suggest better room cleaning, he says. In fact, he has worked with SHM leadership for years to engage hospitalists about the “critical necessity of antimicrobial stewardship.”
“They’re all critical to prevent transmission,” says Dr. Fishman, who chairs the CDC’s Healthcare Infection Control Practices Advisory Committee. “That’s part of the things that can be done in the here and now to try to prevent people from getting infected with these organisms. It’s what the CDC calls ‘detect and prevent.’”
Dr. Fishman’s suggestions echo findings in a new CDC report that shows a threefold increase in the proportion of Enterobacteriaceae bugs that proved resistant to carbapenem in the past decade. The data, in the CDC’s Morbidity and Mortality Weekly Report, showed the proportion of reported Enterobacteriaceae that were CRE infections jumped to 4.2% in 2011 from 1.2% in 2001.
“It is a very serious public health threat,” says co-author Alex Kallen, MD, MPH, a medical epidemiologist and outbreak response coordinator in the CDC’s Division of Healthcare Quality Promotion. “Maybe it’s not that common now, but with no action, it has the potential to become much more common—like a lot of the other MDROs [multidrug-resistant organisms] that hospitalists see regularly. [Hospitalists] have a lot of control over some of the things that could potentially lead to increased transmission.”
Part of the problem, Dr. Fishman says, is a lack of antibiotic options. Polymyxins briefly showed success against the bacteria, but performance is waning. Dr. Fishman estimates it will be up to eight years before a new antibiotic to combat the infection is in widespread use.
Both he and Dr. Kallen say hospitalists can help reduce the spread of CRE through antibiotic stewardship, review of detailed patient histories to ferret out risk factors, and dedication to contact precautions and hand hygiene.
Dr. Kallen notes hospitalists also can play a leadership role in coordinating efforts for patients transferring between hospitals and other institutions (e.g., skilled nursing or assisted-living facilities). Part of being that leader is refusing to dismiss possible CRE cases.
“If you’re a place that doesn’t see this very often, and you see one, that’s a big deal,” Dr. Kallen says. “It needs to be acted on aggressively. Being proactive is much more effective than waiting until it’s common and then trying to intervene.”
Richard Quinn is a freelance writer in New Jersey.
The wizard of insurance
Thirty years ago, many college patients I saw were covered by a school health policy written by a company I will call James S. Fred Insurance. Because this happened long before electronic claims submissions, we knew that ours were handled by someone named Lucille.
For reasons I no longer recall, I found myself strolling in downtown Boston one afternoon, when I saw a large office building that listed none other than James S. Fred Insurance as a major tenant. I took the elevator to the 17th floor, went in, and asked for Lucille.
Sure enough, sitting in a quiet cubicle, there she was: a pleasant older woman who did the college accounts, a small cog in a massive wheel. When I introduced myself, Lucille recognized my name and greeted me warmly.
"I never expected to meet you in person," I said, "but since I have, perhaps I can tell you about a problem we’re having with reimbursement." I described the issue. Lucille took out a large manual listing the terms of the company’s college coverage. "Here it is," she said, showing me the relevant paragraph.
I thanked her and took the book. But when I read the paragraph, I saw that it didn’t say what she said it said. I pointed this out.
"My goodness," said Lucille. "You’re right. We should be reimbursing you for that, shouldn’t we?"
So that was it. The massive insurance giant in the glass-and-steel skyscraper turned out to be a little old lady in a cubicle who couldn’t read the manual. It was like pulling back the curtain and finding out that the Wizard of Oz was a geezer with a wind machine.
I thought of this last week when I had a talk about my own personal coverage with a Midwest insurer. The issue turned on their responsibility for covering a service provided by a physician who does not participate in Medicare at all. (Yes, I am on Medicare now.)
Last year, I spoke with a human at the company who explained that all I needed to do was confirm that the provider was not Medicare affiliated. This year, after paying a few claims, they apparently changed their minds and sent letters demanding repayment, saying they would pay only what Medicare would have paid, even if Medicare actually paid nothing.
I appealed. The appeal was denied. I could not reach a human. I gave up.
Then last week, Jeanette called from Chicago. She described herself as Head of the Appeals Division, in a voice that sounded like Marian, the no-nonsense librarian from "The Music Man."
"Our policy is based on what’s in the manual," she said. "Let me see if I can find it. Oh, here it is." Then she read a passage about doctors who don’t accept Medicare assignments. "We ask them to submit claims anyway," she explained.
"Forgive me," I said, "but a doctor who doesn’t accept assignment is a Medicare provider, just one who won’t accept as full payment what Medicare allows. My doctor is not a Medicare provider at all. He can’t submit a claim, because he doesn’t have a Medicare provider number."
"My goodness," said Jeanette. "I think you may be right. Have you documented this for us?"
"With every claim," I said. "I followed your company’s instructions, and attached to every claim my doctor’s letter saying he doesn’t participate in Medicare. You should have a dozen or so copies of this letter. If you can’t find any, I’ll be happy to send another."
"Oh, here it is!" said Jeanette. "Yes, I see. We need to rectify this."
I danced a mental jig around the room. Lucille must be long retired, but I’d love to invite her and Jeanette for tea.
"I’m really grateful to have the chance to speak to a person," I told Jeanette. "Thanks so much for listening."
You could hear Jeanette glow right through the phone. "Why, you’re welcome," she said. "You’ve made my whole day!"
Faceless bureaucracies can seem intimidating, impersonal, malevolent, diabolical, Kafkaesque.
But sometimes, they’re just little old ladies who have trouble reading manuals. To find out, just follow the yellow brick road.
Dr. Rockoff practices dermatology in Brookline, Mass.
Stereotactic laser ablation found feasible for hypothalamic hamartoma
SAN DIEGO – Magnetic resonance-guided stereotactic laser ablation is a safe and effective option in the treatment of hypothalamic hamartoma, results from a multicenter pilot study showed.
At the annual meeting of the American Academy of Neurology, Dr. Daniel J. Curry reported results from 20 patients who have undergone treatment with a Food and Drug Administration–cleared neurosurgical tissue coagulation system called Visualase. Hypothalamic hamartoma (HH) is a rare disorder of pediatric epilepsy with an estimated prevalence of 1:50,000-100,000, said Dr. Curry, director of pediatric surgical epilepsy and functional neurosurgery at Texas Children’s Hospital, Houston.
"The main presentation is the mirthless laughter of gelastic seizures, but patients can have other seizure types," he said. "The diagnosis is frequently delayed, and high seizure burden in the brain can lead to epileptic encephalopathy. Seizures are notoriously resistant to medical management, necessitating surgical intervention ... open, endoscopic, or ablative."
To date, surgical intervention has been limited due to modest outcomes, with 37%-50% achieving seizure freedom. The location of HH tumors makes surgical intervention difficult, and as a result 7%-10% of patients have permanent surgical morbidity.
Using the Visualase system, Dr. Curry and his associates at four other medical centers in the United States performed the procedure through a single 4-mm incision, a 3.2-mm burr hole, and a 1.65-mm cannula trajectory under real-time MR thermography, first with a confirmation test at about 3 W, followed by higher doses of 6-10 W for 50-120 seconds. Temperature limits were set to protect the hypothalamus, basilar artery, and optic tract. The surgery had an immediate effect, and patients stayed in the hospital for a mean of 2 days.
The primary outcome measure was seizure frequency at 1 year; the secondary measure was the complication profile of stereotactic laser ablation in epilepsy.
Of the 20 patients, 5 were adults, and the entire study population ranged in age from 22 months to 34 years. A total of 21 ablations were performed in the 20 patients. Dr. Curry reported that all but four patients were seizure free after the procedure; even among those four, the rate of seizures diminished.
Seizures recurred in one of the pediatric patients. "We re-ablated him and he is now seizure free," Dr. Curry said.
Complications to date have included two missed targets, one case of IV phenytoin toxicity, one case of transient diabetes insipidus, two cases of transient hemiparesis, and one subarachnoid hemorrhage. Perioperative, temporary weight gain was detected in most patients. "With lack of hormonal disturbance, this is thought to be due to the perioperative, high-dose steroid use," Dr. Curry explained.
Postoperative interviews with parents of study participants "have revealed significant improvements in intellectual development, concentration, and interactiveness," he said. "Most families report improvement of mood, decreased behavioral disorders, and rage attacks."
To date, only two patients have completed formal postoperative neuropsychological testing. "There were no significant declines in memory in either patient," Dr. Curry said. One had improved math skills and reading comprehension while the other complained of memory dysfunction but was not below normal on testing.
"We have learned that laser ablation of hypothalamic hamartoma can be accomplished safely," Dr. Curry concluded. "More studies are needed to explain the antiepileptic effect in settings of incomplete radiologic destruction of the target and to advance thermal planning."
Dr. Curry said that he had no relevant financial conflicts to disclose.
AT THE 2013 AAN ANNUAL MEETING
Major finding: After 20 patients with hypothalamic hamartoma underwent MR-guided stereotactic laser ablation, all but 4 were seizure free.
Data source: A multicenter pilot study of 21 ablations performed in patients who ranged in age from 22 months to 34 years.
Disclosures: Dr. Curry said that he had no relevant financial conflicts to disclose.
Bosutinib finds its place in the CML treatment paradigm
Drug therapy of chronic myeloid leukemia (CML) used to be simple. Or rather, it was narrow and not very effective. For a long time all we had was interferon alpha (IFN-alpha) and hydroxyurea, which failed to protect most patients from progression to the blastic phase. As a result, allotransplant, although associated with high mortality, was the treatment of choice for all eligible patients. Then imatinib came along and replaced a simple but poor choice with a simple but good choice for drug therapy. Now, 12 years later, the drug therapy space for CML is populated by 5 different tyrosine kinase inhibitors (TKIs; imatinib, dasatinib, nilotinib, bosutinib, and ponatinib) and omacetaxine (previously known as homoharringtonine) in addition to IFN-alpha and hydroxyurea. Navigating this space is a challenge, especially for hematologists and oncologists who don’t have the privilege of specializing. The drug at issue is bosutinib, which has been approved for treating adults “with chronic, accelerated, or blast phase Philadelphia chromosome-positive (Ph+) CML with resistance or intolerance to prior therapy,” but it has not received approval for frontline therapy. A combined phase 1/2 study demonstrated a 41% cumulative rate of complete cytogenetic response (CCyR) in patients with chronic phase CML with resistance to or intolerance of imatinib who were treated with bosutinib; progression-free and overall survival at 2 years were 79% and 92%, respectively, with better results for patients with intolerance compared with patients with resistance. The results are quite comparable with those of nilotinib or dasatinib in the same setting.1-3 In contrast, only 24% of patients on bosutinib achieved CCyR if they had prior exposure to dasatinib or nilotinib in addition to imatinib, which is also similar to the results with dasatinib or nilotinib in the third line,4 although follow-up is shorter.
Only 2 BCR-ABL1 kinase mutations confer resistance to bosutinib: the multiresistant T315I mutation and V299L.5
Onco-bracketology? March Madness meets today’s practice
I have just returned from the Oncology Practice Summit, the annual conference for practice-based oncologists and midlevels, which was hosted by COMMUNITY ONCOLOGY and its sister publications, THE JOURNAL OF SUPPORTIVE ONCOLOGY (JSO) and THE ONCOLOGY REPORT, in Las Vegas. During my flight to the conference, I noticed that there was a certain buzz among the passengers, which I naturally assumed was about our oncology meeting. But as I looked around, I realized that not only was I the only passenger who was wearing a tie, I was also the only one who had knocked back less than one drink. The frenzy was about the first weekend of the NCAA’s March Madness, and the pervasive enthusiasm among the passengers revolved around the well-known “science” of bracketology, in which basketball enthusiasts take all 64 teams in the tournament and try to predict which team will win each match as the teams work their way down to the Final Four and ultimately, to the winner. President Obama had already said that his pick was Indiana (we know now how that turned out — sorry Indiana), but the amateur handicappers on the plane were still sifting through the teams’ records and the coaches’ and individual players’ strengths and weaknesses to bet (upon their arrival in Las Vegas) on which team would ultimately prevail. Once in Las Vegas, we managed to have our conference despite the March Madness mayhem, and in the course of the meeting, the term bracketology took on an oncology-tinged relevance for me. Bear with me.
Get ready now for 2014 Medicare ACO program
The Centers for Medicare and Medicaid Services has just announced key dates for the 2014 Medicare Shared Savings Program application cycle – and although the upcoming Jan. 1, 2014, start date for the MSSP seems far off, physicians should start organizing now.
Physician interest in participating is mounting, as physician-led accountable care organizations are emerging as leaders in improving quality while eradicating waste. In fact, there are now more physician-run ACOs than any other model (see chart below).
Physicians see opportunity
The MSSP has embraced the accountable care concept to improve the quality of care for Medicare fee-for-service beneficiaries. Eligible providers and suppliers may participate in the MSSP by creating or participating in an ACO. The MSSP rewards ACOs that lower their rate of growth in health care costs while meeting quality performance standards.
On Jan. 10, 2013, the Centers for Medicare and Medicaid Services (CMS) announced that 106 new organizations were selected to participate in the program. That’s in addition to the 87 ACOs approved in July 2012 and the 27 selected in April 2012 – bringing the total to 220 ACOs selected to participate in the MSSP. Early evidence indicates that these ACOs are decreasing costs while improving clinical outcomes.
For many of those ACOs, Medicare will be just the beginning. Private insurers such as Aetna, UnitedHealth Group, Humana, Cigna, and most Blue Cross plans are contracting with ACOs to care for more patients. Many state Medicaid programs have moved or are considering moving to accountable care.
These multiple streams of shared savings will be generated through the same ACO infrastructure needed for the MSSP, encouraging more physician-owned ACOs to form.
With the rise of ACOs, "providers are doing things in a positive way rather than a reactive way. We are seeing the beginnings of a tsunami," noted Dr. Michael Cryer, national medical director at employee benefits consultancy Aon Hewitt, in a New York Times article ("Small-picture approach flips medical economics," March 12, 2012).
According to a recent study by consulting firm Oliver Wyman entitled "The ACO Surprise," roughly 10% of the U.S. population, or from 25 million to 31 million patients, are being served by ACOs. "Successful ACOs won’t just siphon patients away from traditional providers. They will change the rules of the game," the report’s authors conclude.
Don’t miss these 2013 deadlines
CMS has just released its 2013 application cycle for 2014 (see table). The time to act is now. It will take time to understand ACOs and enlist a critical mass of informed and committed primary care providers. Though the notice of intent ("NOI") is not binding, failure to file one in May is: you will be barred from applying. Likewise, you must obtain your user ID by May 31.
The application itself is not difficult, but it essentially lays out your ACO game plan. You must be organized, have a focused care plan, and complete the application by the end of July – much earlier than last year’s deadline.
Bottom line: Do not let the start date lull you into procrastination.
Let’s have a closer look at some of the things that must be covered in the application. In addition to a culture of teamwork, patient engagement, and alignment of financial incentives, which are chief among the eight essential elements necessary for a successful ACO ("The essential elements of an ACO," Internal Medicine News, Oct. 1, 2012, p. 38), the MSSP application requires:
• Compliance with the required definitions of "ACO applicant" and "participant."
• A certification that the ACO, its ACO-provider participants, and its ACO providers/suppliers have agreed to become accountable for the quality, cost, and overall care of the Medicare fee-for-service beneficiaries assigned to the ACO.
• Establishment of a governing body.
• Implementation of a comprehensive compliance plan.
• Execution of an ACO Participation Agreement.
In addition, certain organizational milestones should be reached in advance of the application. In particular, planning for a successful ACO requires identification of a physician-champion, completion of a feasibility analysis, implementation of sufficient information technology, and internal reporting on quality and cost metrics. As in any entrepreneurial pursuit, timing is critical, and delay equates to lost potential.
Given that primary care providers are the only providers mandated for inclusion in the MSSP, it is apparent that CMS expects primary care to drive ACO value via prevention and wellness; chronic disease management; care transitions and navigation; reduced hospitalizations; and multispecialty care coordination of complex patients.
ACOs, in one form or another, are sure to be permanent fixtures in American health care, as the nation’s economy and its residents eagerly await the benefits stemming from primary care–driven innovation.
Opportunity knocks – get going!
More information about the Medicare Shared Savings Program is available from CMS.
Mr. Bobbitt is a senior partner and head of the Health Law Group at the Smith Anderson law firm in Raleigh, North Carolina. He has many years’ experience assisting physicians in forming integrated delivery systems. He has spoken and written nationally to primary care physicians on the strategies and practicalities of forming or joining ACOs. This article is meant to be educational and does not constitute legal advice. For additional information, readers may contact the author ([email protected] or 919-821-6612). Mr. McNeill is a practicing attorney pursuing his LLM at Duke University, currently focusing on accountable care.
Medicare Shared Savings Program deadlines
Key dates for Jan. 1, 2014, start:
Notice of intent (NOI) accepted: May 1-31, 2013
CMS user ID forms accepted: May 1-31, 2013
Applications accepted: July 1-31, 2013
Application approval or denial decision: Fall 2013
Start date for MSSP ACO: Jan. 1, 2014
Source: Centers for Medicare and Medicaid Services
Conflict between randomized and registry trials
A recent spate of observational or registry analyses has challenged conventional wisdom derived from randomized clinical trials (RCTs). Both RCTs and registries have inherent flaws, but both provide important information about drug efficacy in the search for "truth."
RCTs examine therapeutic effects in highly selected patient populations by focusing on one clinical entity, thereby excluding many patients with comorbidities that could influence or blunt the effect of the intervention. In a sense, RCTs do not represent the real-world expression of disease, since diseases rarely exist in isolation.
Registry trials collect large numbers of patients with a particular diagnosis within a large database. They include unselected patients and examine the effect of therapy in one disease regardless of comorbidities and are subject to both doctor and patient bias and confounding by comorbidities like chronic renal and pulmonary disease and, above all, are not randomized. Using a contemporary analogy, RCTs are a rifle shot whereas registries are more of a shotgun blast.
There have been two important recent targets for clinical research in heart failure. One is the search for better therapy for heart failure patients with preserved ejection fraction (HFPEF). The other is the search for drugs or devices that can provide added benefit to contemporary therapy for heart failure with reduced ejection fraction (HFREF).
The observation that many HFPEF patients develop heart failure despite current therapy with renin angiotensin aldosterone system (RAAS) antagonists and beta-blockers has led to a search for better therapy. RCTs with newer agents, including focused therapy with new RAAS antagonists, have failed to affect mortality in HFPEF (Lancet 2003;362:759-66). In contrast, a recent publication using the Swedish Heart Failure Registry (JAMA 2012;308:2108-17) found that patients treated with RAAS antagonists benefited compared with patients not taking them. The failure of the newer drugs to reach significance was attributed to flawed patient selection in RCTs that led to lower mortality rates and rendered the trials underpowered.
Similar discordance was observed between RCT and registry data in patients with HFREF who were treated with an aldosterone antagonist (AA) in addition to contemporary RAAS antagonist and beta-blocker therapy. Using the Medicare database (JAMA 2012;308:2097-107), the investigators failed to observe any treatment benefit of AA on mortality that had previously been reported (N. Engl. J. Med. 1999;341:709-17). They did observe a decrease in rehospitalization for heart failure, along with an increase in rehospitalization for hyperkalemia. The authors attributed the reported benefit in the RCT to the exclusion of older and diabetic patients, as well as those with renal impairment, who were included in the registry analysis and reflected the real world of HFREF.
One registry study examining the benefit of ICDs in heart failure patients (JAMA 2013;309:55-62) from the analysis by the National Cardiovascular Registry did support the mortality benefit observed in the RCT (N. Engl. J. Med. 2002; 346:877-83).
As RCTs have developed over the last half-century, they have changed from investigations of therapeutic concepts to assessments of the efficacy of new and, often, expensive drugs. Much of this search has been supported by the pharmaceutical and device industries, which are intent on more focused research because of their concern about the "noise" generated by comorbidities that could obscure the benefit of their product. As a result, RCTs have identified lower-risk, homogeneous patient populations that may not reflect the real-world experience. Nevertheless, registry studies suffer from the major effect of bias, which is influenced by the physicians’ therapeutic choices and can distort the observed outcome. Unfortunately, the search for "truth" in clinical research remains often out of our reach.
Dr. Goldstein, medical editor of Cardiology News, is professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.
A recent spate of observational or registry analyses has challenged conventional wisdom derived from randomized clinical trials (RCTs). Both RCTs and registries have inherent flaws, yet both provide important information about drug efficacy in the search for "truth."
RCTs examine therapeutic effects in highly selected patient populations by focusing on one clinical entity, thereby excluding many patients with comorbidities that could influence or blunt the effect of the intervention. In that sense, RCTs do not represent the real-world expression of disease, since disease rarely exists in isolation.
Registry trials collect large numbers of patients with a particular diagnosis within a large database. They include unselected patients and examine the effect of therapy on one disease regardless of comorbidities; they are subject to both physician and patient bias, to confounding by comorbidities such as chronic renal and pulmonary disease, and, above all, they are not randomized. Using a contemporary analogy, RCTs are a rifle shot, whereas registries are more of a shotgun blast.
There have been two recent important targets for clinical research in heart failure. One is the search for better therapy for heart failure patients with preserved ejection fraction (HFPEF). The other is the search for drugs or devices that can add benefit to contemporary therapy for heart failure with reduced ejection fraction (HFREF).
The observation that many HFPEF patients develop heart failure despite current therapy with renin-angiotensin-aldosterone system (RAAS) antagonists and beta-blockers has led to a search for better therapy. RCTs with newer agents, including focused therapy with new RAAS antagonists, have failed to affect mortality in HFPEF (Lancet 2003;362:759-66). In contrast, a recent publication using the Swedish Heart Failure Registry (JAMA 2012;308:2108-17) found that patients treated with RAAS antagonists fared better than patients not taking them. The failure of the newer drugs to reach significance was attributed to flawed patient selection in the RCTs, which led to lower-than-expected mortality rates and rendered the trials underpowered.
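The link between event rates and statistical power can be illustrated with a rough two-proportion power calculation. This is a sketch with hypothetical event rates and a hypothetical sample size, not figures from any trial cited above:

```python
from math import sqrt, erf

def norm_cdf(x: float) -> float:
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def power_two_proportions(p_control: float, p_treated: float,
                          n_per_arm: int, z_alpha: float = 1.96) -> float:
    """Approximate power of a two-sided two-proportion z-test at alpha = 0.05."""
    se = sqrt(p_control * (1 - p_control) / n_per_arm
              + p_treated * (1 - p_treated) / n_per_arm)
    return norm_cdf(abs(p_control - p_treated) / se - z_alpha)

# Same 20% relative risk reduction, hypothetical n = 1,500 per arm:
# at a 15% control-arm mortality the trial has reasonable power, but if
# selection enrolls lower-risk patients (5% mortality), power collapses
# and a real benefit can fail to reach significance.
print(round(power_two_proportions(0.15, 0.12, 1500), 2))  # ~0.67
print(round(power_two_proportions(0.05, 0.04, 1500), 2))  # ~0.26
```

The point of the sketch: halving the absolute event rate at a fixed relative benefit shrinks the absolute difference the trial must detect, so a trial sized for sicker patients becomes underpowered when selection yields a healthier cohort.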
Similar discordance was observed between RCT and registry data in patients with HFREF who were treated with an aldosterone antagonist (AA) in addition to contemporary RAAS-antagonist and beta-blocker therapy. Using the Medicare database (JAMA 2012;308:2097-107), the investigators failed to observe the treatment benefit of AA on mortality that had previously been reported (N. Engl. J. Med. 1999;341:709-17). They did observe a decrease in rehospitalization for heart failure, accompanied by an increase in rehospitalization for hyperkalemia. The authors attributed the benefit reported in the RCT to its exclusion of older and diabetic patients, in addition to those with renal impairment, who were included in the registry analysis and reflected the real world of HFREF.
One registry study examining the benefit of ICDs in heart failure patients, drawn from the National Cardiovascular Registry (JAMA 2013;309:55-62), did support the mortality benefit observed in the RCT (N. Engl. J. Med. 2002;346:877-83).
As RCTs have developed over the last half-century, they have changed from investigations of therapeutic concepts to assessments of the efficacy of new and, often, expensive drugs. Much of this research has been supported by the pharmaceutical and device industries, which favor narrowly focused trials out of concern that the "noise" generated by comorbidities could obscure the benefit of their product. As a result, RCTs have enrolled lower-risk, homogeneous patient populations that may not reflect real-world experience. Registry studies, on the other hand, suffer from major selection bias, driven by physicians' therapeutic choices, which can distort the observed outcome. Unfortunately, the "truth" in clinical research often remains out of our reach.
Dr. Goldstein, medical editor of Cardiology News, is professor of medicine at Wayne State University and division head emeritus of cardiovascular medicine at Henry Ford Hospital, both in Detroit. He is on data safety monitoring committees for the National Institutes of Health and several pharmaceutical companies.
Recruiting Hospital Patients for Research
Randomized controlled trials (RCTs) generally provide the most rigorous evidence for clinical practice guidelines and quality‐improvement initiatives. However, 2 major shortcomings limit the ability to broadly apply these results to the general population. One has to do with sampling bias (due to subject consent and inclusion/exclusion criteria) and the other with potential differences between participants and eligible nonparticipants. The latter may be of particular importance in trials of behavioral interventions (rather than medication trials), which often require substantial participant effort.
First, individuals who provide written consent to participate in RCTs of behavioral interventions typically represent a minority of those approached and therefore may not be representative of the target population. Although the consenting proportion is often not disclosed, some estimate that only 35% to 50% of eligible subjects typically participate.[1, 2, 3] These estimates mirror the authors' prior experience with a 55.2% consent rate among subjects approached for a Medicare quality‐improvement behavioral intervention.[3] Though the literature is sparse, it suggests that eligible individuals who decline to participate in either interventions or usual care may differ from participants in their perception of intervention risks and effort[4] or in their levels of self‐efficacy or confidence in recovery.[5, 6] Relatively low enrollment rates mean that much of the population remains unstudied; however, evidence‐based interventions are often applied to populations broader than those included in the original analyses.
Additionally, although some nonparticipants may correctly decide that they do not need the assistance of a proposed intervention and therefore decline to participate, others may inappropriately judge the intervention's potential benefit and applicability when declining. In other words, electing to not participate in a study, despite eligibility, may reflect more than a refusal of inconvenience, disinterest, or desire to contribute to knowledge; for some individuals it may offer a proxy statement about health knowledge, personal beliefs, attitudes, and needs, including perceived stress,[5] cultural relevance,[7, 8] and literacy/health literacy.[9, 10] Characterizing these patients can help us to modify recruitment approaches and improve participation so that participants better represent the target population. If these differences also relate to patients' adherence to care recommendations, a more nuanced understanding could improve ways to identify and engage potentially nonadherent patients to improve health outcomes.
We hypothesized that we could identify characteristics that differ between behavioral‐intervention participants and eligible nonparticipants using a set of screening questions. We proposed that these characteristics (constructs related to perceived stress, recovery expectation, health literacy, and insight and action regarding advance care planning, as well as confusion by any question) would predict the likelihood of consenting to a behavioral intervention requiring substantial subject engagement. Some of these characteristics may relate to adherence to preventive care or treatment recommendations. We did not specifically hypothesize about the distribution of demographic differences.
METHODS
Study Design
This was a prospective observational study conducted within a larger behavioral intervention.
Screening Question Design
We adapted our screening questions from several previously validated surveys, selecting questions related to perceived stress and self‐efficacy,[11] recovery expectations, health literacy/medication label interpretation,[12] and discussing advance directives (Table 1). Some of these characteristics may relate to adherence to preventive care or treatment programs[13, 14] or to clinical outcomes.[15, 16]
Screening Question | Adapted From Original Validated Question | Source | Construct |
---|---|---|---|
In the last week, how often have you felt that you are unable to control the important things in your life? (Rarely, sometimes, almost always) | In the last month, how often have you felt that you were unable to control the important things in your life? (Never, almost never, sometimes, fairly often, very often) | Adapted from the Perceived Stress Scale (PSS‐14).[11] | Perceived stress, self‐efficacy |
In the last week, how often have you felt that difficulties were piling up so high that you could not overcome them? (Rarely, sometimes, almost always) | In the last month, how often have you felt difficulties were piling up so high that you could not overcome them? (Never, almost never, sometimes, fairly often, very often) | Adapted from the Perceived Stress Scale (PSS‐14).[11] | Perceived stress, self‐efficacy |
How sure are you that you can go back to the way you felt before being hospitalized? (Not sure at all, somewhat sure, very sure) | | Courtesy of Phil Clark, PhD, University of Rhode Island, drawing on research on resilience. Similar questions are used in other studies, including studies of postsurgical recovery.[29, 30, 31] | Recovery expectation, resilience |
Even if you have not made any decisions, have you talked with your family members or doctor about what you would want for medical care if you could not speak for yourself? (Yes, no) | | Based on consumer‐targeted materials on advance care planning. | Advance care planning |
(Show patient a picture of prescription label.) How many times a day should someone take this medicine? (Correct, incorrect) | (Show patient a picture of ice cream label.) If you eat the entire container, how many calories will you eat? (Correct, incorrect) | Adapted from Pfizer's Clear Health Communication: The Newest Vital Sign.[12] | Health literacy |
Prior to administering the screening questions, we performed cognitive testing with residents of an assisted‐living facility (N=10), a population that resembles our study's target population. In response to cognitive testing, we eliminated a question not interpreted easily by any of the participants, identified wording changes to clarify questions, simplified answer choices for ease of response (especially because questions are delivered verbally), and moved the most complicated (and potentially most embarrassing) question to the end, with more straightforward questions toward the beginning. We also substantially enlarged the image of a standard medication label to improve readability. Our final tool included 5 questions (Table 1).
The final instrument prompted coaches to record patient confusion. Additionally, the advance‐directive question included a "refused to answer" option, and the medication question included an "unable to answer (needs glasses, too tired, etc.)" option, a potential marker of low health literacy if used as an excuse to avoid embarrassment.[17]
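The structure of the final instrument, including the special response options noted above, can be sketched as a small data structure. Field names and the response-recording helper here are illustrative, not part of the study's materials; question constructs and options follow Table 1:

```python
# Illustrative encoding of the 5-question screening instrument (see Table 1).
SCREENING_QUESTIONS = [
    {"construct": "perceived stress (control)",
     "options": ["rarely", "sometimes", "almost always"]},
    {"construct": "perceived stress (overwhelmed)",
     "options": ["rarely", "sometimes", "almost always"]},
    {"construct": "recovery expectation",
     "options": ["not sure at all", "somewhat sure", "very sure"]},
    {"construct": "advance care planning",
     "options": ["yes", "no", "refused to answer"]},
    {"construct": "health literacy (medication label)",
     "options": ["correct", "incorrect",
                 "unable to answer (needs glasses, too tired, etc.)"]},
]

def record_response(question_index, answer):
    """Record one verbal response; None marks 'confused by the question'."""
    q = SCREENING_QUESTIONS[question_index]
    if answer is not None and answer not in q["options"]:
        raise ValueError(f"unexpected answer: {answer!r}")
    return {"construct": q["construct"], "answer": answer,
            "confused": answer is None}

print(record_response(4, None)["confused"])  # True
```

Keeping the response options enumerated per question makes it straightforward to flag out-of-range answers and to tally confusion separately from substantive responses.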
Setting
We recruited inpatients at 5 Rhode Island acute‐care hospitals, including 1 community hospital, 3 teaching hospitals, and a tertiary‐care center and teaching hospital, ranging from 174 beds to 719 beds. Recruitment occurred from November 2010 to April 2011. The hospitals' respective institutional review boards approved the screening questions.
Study Population
We recruited a convenience sample of consecutively identified hospitalized Medicare fee‐for‐service beneficiaries, identified as (1) eligible for the subsequent behavioral intervention based on inpatient census lists and (2) willing to discuss an offer for a home‐based behavioral intervention. The behavioral intervention, based on the Care Transitions Intervention and described elsewhere,[3, 18] included a home visit and 2 phone calls (each about 1 hour). Coaches used a personal health record to help patients and/or caregivers better manage their health by (1) being able to list their active medical conditions and medications and (2) understanding warning signs indicating a need to reach out for help, including getting a timely medical appointment after hospitalization. The population for the present study included individuals approached to discuss participation in the behavioral intervention who also agreed to answer the screening questions.
Inclusion/Exclusion Criteria
We included hospitalized Medicare fee‐for‐service beneficiaries. We excluded patients who were current long‐term care residents, were to be discharged to long‐term or skilled care, or had a documented hospice referral. We also excluded patients with limited English proficiency or who were judged to have inadequate cognitive function, unless a caregiver agreed to receive the intervention as a proxy. We made these exclusions when recruiting for the behavioral intervention. Because we presented the screening questions to a subset of those approached for the behavioral intervention, we did not further exclude anyone. In other words, we offered the screening questions to all 295 people we approached during this study time period (100%).
Screening‐Question Study Process
Coaches asked patients to answer the 5 screening questions immediately after offering them the opportunity to participate in the behavioral intervention, regardless of whether or not they accepted the behavioral intervention. This study examines the subset of patients approached for the behavioral intervention who verbally consented to answer the screening questions.
Data Sources and Covariates
We analyzed primary data from the screening questions and behavioral intervention (for those who consented to participate), as well as Medicare claims and Medicaid enrollment data. We matched screening‐question data from November 2010 through April 2011 with Medicare Part A claims from October 2010 through May 2011 to calculate 30‐day readmission rates.
We obtained the following information for patients offered the behavioral intervention: (1) responses to screening questions, (2) whether patients consented to the behavioral intervention, (3) exposure to the behavioral intervention, and (4) recruitment date. Medicare claims data included (1) admission and discharge dates to calculate the length of stay, (2) index diagnosis, (3) hospital, and (4) site of discharge. Medicare enrollment data provided information on (1) Medicaid/Medicare dual‐eligibility status, (2) sex, and (3) patient‐reported race. We matched data based on patient name and date of birth. Our primary outcome was consent to the behavioral intervention. Secondarily, we reviewed posthospital utilization patterns, including hospital readmission, emergency‐department use, and use of home‐health services.
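The 30-day readmission outcome described above reduces to a date-window check against Part A claims. A minimal sketch, assuming admission and discharge dates have already been matched to a patient (the function name is illustrative):

```python
from datetime import date

def is_30day_readmission(index_discharge, next_admission):
    """True when the next admission falls within 30 days of the index discharge."""
    delta = (next_admission - index_discharge).days
    return 0 <= delta <= 30

# Discharged Jan 10, readmitted Feb 5 (26 days later): counts as a readmission.
print(is_30day_readmission(date(2011, 1, 10), date(2011, 2, 5)))   # True
# Readmitted Feb 20 (41 days later): outside the window.
print(is_30day_readmission(date(2011, 1, 10), date(2011, 2, 20)))  # False
```

This is also why the claims window (October 2010 through May 2011) extends one month on either side of the recruitment period: the 30-day window for the earliest and latest discharges must be fully observable.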
Statistical Analysis
We categorized patients into 2 groups (Figure 1): participants (consented to the behavioral intervention) and nonparticipants (eligible for the behavioral intervention but declined to participate). We excluded responses for those confused by the question (no response). For the response scales "rarely, sometimes, almost always" and "not sure at all, somewhat sure, very sure," we isolated the most negative response, grouping the middle and most positive responses (Table 2). For the medication‐label question, we grouped "incorrect" and "unable to answer (needs glasses, too tired, etc.)" responses. We compared demographic differences between behavioral intervention participants and nonparticipants using chi-square tests (categorical variables) and Student t tests (continuous variables). We then used multivariate logistic regression to analyze differences in consent to the behavioral intervention based on screening‐question responses, adjusting for demographics that differed significantly in the bivariate comparisons.
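The response grouping and the bivariate step can be sketched as follows. The counts in the 2x2 example are toy numbers, not study data, and the statistic is the plain Pearson chi-square without continuity correction:

```python
# Most negative responses, per the grouping scheme in Table 2.
NEGATIVE = {"almost always", "not sure at all"}

def dichotomize(answer):
    """1 = most negative response, 0 = middle/positive pooled, None = confused (excluded)."""
    if answer is None:
        return None
    return 1 if answer in NEGATIVE else 0

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))

print(dichotomize("sometimes"))      # 0 (pooled with the positive response)
print(dichotomize("almost always"))  # 1
# Toy 2x2: rows = consented/declined, columns = negative/other response.
print(round(chi_square_2x2(12, 148, 18, 82), 3))  # 6.647
```

Compared against the chi-square critical value of 3.84 (1 degree of freedom, alpha = 0.05), the toy statistic would be judged significant; the study's adjusted analysis then moves from such bivariate comparisons to logistic regression.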

Screening‐Question Response | Adjusted OR (95% CI) | P Value |
---|---|---|
In the last week, how often have you felt that you are unable to control the important things in your life? | ||
Out of control (Almost always) | 0.35 (0.14‐0.92) | 0.034a |
In control (Sometimes, rarely) | 1.00 (Ref) | |
In the last week, how often have you felt that difficulties were piling up so high that you could not overcome them? | ||
Overwhelmed (Almost always) | 0.41 (0.16‐1.07) | 0.069 |
Not overwhelmed (Sometimes, rarely) | 1.00 (Ref) | |
How sure are you that you can go back to the way you felt before being hospitalized? | ||
Not confident (Not sure at all) | 0.17 (0.06‐0.45) | 0.001a |
Confident (Somewhat sure, very sure) | 1.00 (Ref) | |
Even if you have not made any decisions, have you talked with your family members or doctor about what you would want for medical care if you could not speak for yourself? | ||
No | 0.45 (0.13‐1.64) | 0.227 |
Yes | 1.00 (Ref) | |
How many times a day should someone take this medicine? (Show patient a medication label) | ||
Incorrect answer | 3.82 (1.12‐13.03) | 0.033a |
Correct answer | 1.00 (Ref) | |
Confused by any question? | ||
Yes | 0.11 (0.05‐0.24) | 0.001a |
No | 1.00 (Ref) |
The authors used SAS version 9.2 (SAS Institute, Inc., Cary, NC) for all analyses.
RESULTS
Of the 295 patients asked to complete the screening questions, 260 (88.1%) consented to answer the screening questions and 35 (11.9%) declined. More than half of those who answered the screening questions consented to participate in the behavioral intervention (160; 61.5%) (Figure 1). When compared with nonparticipants, participants in the behavioral intervention were younger (25.6% vs 40.0% aged ≥85 years, P=0.028), had a longer average length of hospital stay (7.9 vs 6.1 days, P=0.008), were more likely to be discharged home without clinical services (35.0% vs 23.0%, P=0.041), and were unevenly distributed among the 5 recruitment‐site hospitals, coming primarily from the teaching hospitals (P<0.001) (Table 3). There were no significant differences based on race, sex, dual‐eligible Medicare/Medicaid status, presence of a caregiver, or index diagnosis.
Patient Characteristics | Declined (n=100) | Consented (n=160) | P Value |
---|---|---|---|
Male, n (%) | 34 (34.0) | 52 (32.5) | 0.803 |
Race, n (%) | |||
White | 94 (94.0) | 151 (94.4) | 0.691 |
Black | 2 (2.0) | 5 (3.1) | |
Other | 4 (4.0) | 4 (2.5) | |
Age, n (%), y | |||
<65 | 17 (17.0) | 23 (14.4) | 0.028a |
65–74 | 14 (14.0) | 42 (26.3) |
75–84 | 29 (29.0) | 54 (33.8) |
≥85 | 40 (40.0) | 41 (25.6) |
Dual eligible, n (%)b | 11 (11.0) | 24 (15.0) | 0.358 |
Caregiver present, n (%) | 17 (17.0) | 34 (21.3) | 0.401 |
Length of stay, mean (SD), d | 6.1 (4.1) | 7.9 (4.8) | 0.008a |
Index diagnosis, n (%) | |||
Acute MI | 3 (3.0) | 6 (3.8) | 0.806 |
CHF | 6 (6.0) | 20 (12.5) | 0.111 |
Pneumonia | 7 (7.0) | 9 (5.6) | 0.572 |
COPD | 6 (6.0) | 6 (8.8) | 0.484 |
Discharged home without clinical services, n (%)c | 23 (23.0) | 56 (35.0) | 0.041a |
Hospital site | |||
Hospital 1 | 15 (15.0) | 43 (26.9) | <0.001a |
Hospital 2 | 20 (20.0) | 26 (16.3) | |
Hospital 3 | 15 (15.0) | 23 (14.4) | |
Hospital 4 | 2 (2.0) | 48 (30.0) | |
Hospital 5 | 48 (48.0) | 20 (12.5) |
Patients who identified themselves as being unable to control important things in their lives were 65% less likely to consent to the behavioral intervention than those in control (odds ratio [OR]: 0.35, 95% confidence interval [CI]: 0.14‐0.92), and those who did not feel confident about recovering were 83% less likely to consent (OR: 0.17, 95% CI: 0.06‐0.45). Individuals who were confused by any question were 89% less likely to consent (OR: 0.11, 95% CI: 0.05‐0.24). Individuals who answered the medication question incorrectly were nearly 4 times as likely to consent (OR: 3.82, 95% CI: 1.12‐13.03). There were no significant differences in consent for feeling overwhelmed (difficulties piling up) or for having discussed advance care planning with family members or doctors.
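The "% less likely" figures above are simple transformations of the adjusted odds ratios, expressed on the odds scale (not the probability scale). A minimal sketch of the arithmetic:

```python
def pct_change_in_odds(odds_ratio):
    """Express an odds ratio as a percent change in the odds of consenting."""
    return round((odds_ratio - 1) * 100, 1)

print(pct_change_in_odds(0.35))  # -65.0 -> "65% less likely"
print(pct_change_in_odds(0.17))  # -83.0 -> "83% less likely"
print(pct_change_in_odds(0.11))  # -89.0 -> "89% less likely"
print(pct_change_in_odds(3.82))  # 282.0 -> nearly a fourfold increase in odds
```

Note that because consent was common in this sample (61.5%), odds ratios overstate the corresponding relative risks, so these figures describe changes in odds rather than in probability.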
We had insufficient power to detect significant differences in posthospital utilization (including hospital readmission, emergency‐department use, and receipt of home health), based on screening‐question responses (data not shown).
DISCUSSION
We find that patients who declined to participate in the behavioral intervention (eligible nonparticipants) differed from participants in 3 important ways: perceived stress, recovery expectation, and health literacy. As hypothesized, patients with higher perceived stress and lower recovery expectation were less likely to consent to the behavioral intervention, even after adjusting for demographic and healthcare‐utilization differences. Contrary to our hypothesis, patients who incorrectly answered the medication question were more likely to consent to the intervention than those who correctly answered.
Characterizing nonparticipants and participants can offer important insight into the limitations of the research that informs clinical guidelines and behavioral interventions. Such characteristics could also indicate how to better engage patients in interventions or other aspects of their care, if associated with lower rates of adherence to recommended health behaviors or treatment plans. For example, self‐efficacy (closely related to perceived stress) and hopelessness regarding clinical outcomes (similar to low recovery expectation in the present study) are associated with nonadherence to medication plans and other care in some populations.[5, 6] Other more extreme stress, like that following a major medical event, has also been associated with a lower rate of adherence to medication regimens and a resulting higher rate of hospital readmission and mortality.[19, 20] People with low health literacy (compared with adequate health literacy) are more likely to report being confused about their medications, needing help to read medication labels, and missing appointments due to trouble reading reminder cards.[9] Identifying these characteristics may assist providers in helping patients address adherence barriers by first accurately identifying the root of patient issues (eg, whether the lack of confidence in recovery is rooted in lack of resources or social support), then potentially referring to community resources where possible. For example, some states (including Rhode Island, this study's location) may have Aging and Disability Resource Centers dedicated to linking elderly people with transportation, decision support, and other resources to support quality care.
The association between health literacy and intervention participation remains uncertain. Our question, which assessed interpretation of a prescription label as a health‐literacy proxy, may have given patients insight into their limited health literacy that motivated them to accept the subsequent behavioral intervention. Others have found that lower-health-literacy patients want their providers to know that they did not understand some health words,[9] though they may be less likely to ask questions, request additional services, or seek new information during a medical encounter.[21] In our study, those who correctly answered the medication‐label question were almost mutually exclusive from those who were otherwise stressed (12% overlap; data not shown). Thus, patients who correctly answer this question may correctly realize that they do not need the support offered by the behavioral intervention and decline to participate. For other patients, perceived stress and poor recovery expectations may be more immediate and important determinants of declination, with patients too stressed to volunteer for another task, even if it involves much‐needed assistance.
The frequency with which patients were confused by the questions merits further comment and may also be driven by stress. Though each question seeks to identify the impact of a specific construct (Table 1), being confused by any question may reflect a more general (or subacute) level of cognitive impairment or generalized low health literacy not limited to the applied numeracy of the medication‐label question. We excluded confused responses to demonstrate more clearly the impact of each individual construct.
The impact of these characteristics may be affected by study design or other characteristics. One of the few studies to examine (via RCT) how methods affect consent found that participation decreased with increasing complexity of the consent process: written consent yielded the lowest participation, limited written consent was higher, and verbal consent was the highest.[10] Other tactics to increase consent include monetary incentives,[22] culturally sensitive materials,[7] telephone reminders,[23] an opt‐out instead of opt‐in approach,[23] and an open design where participants know which treatment they are receiving.[23] We do not know how these tactics relate to the characteristics captured in our screening questions, although other characteristics we measured, such as patients' self‐identified race, have been associated with intervention participation and access to care,[8, 24, 25] and patients who perceive that the benefit of the intervention outweighs expected risks and time requirements are more likely to consent.[4] We intentionally minimized the number of screening questions to encourage participation. The high rate of consent to our screening questions compared with consent to the (more involved) behavioral intervention reveals how sensitive patients are to the perceived invasiveness of an intervention.
We note several limitations. First, overall generalizability is limited due to our small sample size, use of consecutive convenience sampling, and exclusion criteria (eg, patients discharged to long‐term or skilled nursing care). Moreover, these results may not apply to patients who are not hospitalized; hospitalized patients may have different motivations and stressors regarding their involvement in their care. Additionally, although we included as many people with mild cognitive impairment as possible by proxy through caregivers, we excluded some who did not have caregivers, potentially undermining the accuracy of how cognition impacts the choice to accept the behavioral intervention. Because researchers often explicitly exclude individuals based on cognitive impairment, differences between recruited subjects and the population at large may be particularly high among elderly patients, where up to half of the eligible population may be affected by cognitive impairment.[26] Further research into successfully engaging caregivers as a way to reach otherwise‐excluded patients with cognitive impairment can help to mitigate threats to generalizability. Finally, our screening questions are based on validated questions, but we rearranged our question wording, simplified answer choices, and removed them from their original context. Thus, the questions were not validated in our population or when administered in this manner. Although we conducted cognitive testing, further validity and reliability testing are necessary to translate these questions into a general screening tool. The medication‐label question also requires revision; in data collection and analysis, we assume that patients who were "unable to answer (needs glasses, too tired, etc.)" were masking an inability to respond correctly. Though the use of this excuse is cited in the literature,[17] we cannot be certain that our treatment of it in these screening questions is generalizable.
Generalizability also applies to how we group responses. Isolating the most negative response (by grouping the middle answer with the most positive answer) most specifically identifies individuals more likely to need assistance and is therefore clinically pertinent, but this also potentially fails to identify individuals who also need help but do not choose the more extreme answer. Further research to refine the screening questions might also consider the timeframe of the perceived stress questions (past week rather than past month); this timeframe may be specific to the acute medical situation rather than general or unrelated perceived stress. Though this study cannot test this hypothesis, individuals with higher pre‐illness perceived stress may be more interested in addressing the issues that were stressors prior to acute illness, rather than the offered behavioral intervention. Additionally, some of the questions were highly correlated (Q1 and Q2) and indicate a potential for shortening the screening questionnaire.
Still, these findings further the discussion of how to identify and consent hospitalized patients for participation in behavioral interventions, both for research and for routine clinical care. Researchers should specifically consider how to engage individuals who are stressed and are not confident about recovery to improve reach and effectiveness. For example, interventions should prospectively collect data on stress and confidence in recovery and include protocols to support people who are positively identified with these characteristics. These characteristics may also offer insight into improving patient and caregiver engagement; more research is needed into characteristics related to patients' willingness to seek assistance in care. We are not the first to suggest that characteristics not observed in medical charts may impact patient completion or response to behavioral interventions,[27, 28] and considering differences between participants and eligible nonparticipants in clinical care delivery and interventions can strengthen the evidence base for clinical improvements, particularly related to patient self‐management. The implications are useful for both practicing clinicians and larger systems examining the comparativeness of patient interventions and generalizing results from RCTs.
Acknowledgments
The authors thank Phil Clark, PhD, and the SENIOR Project (Study of Exercise and Nutrition in Older Rhode Islanders) research team at the University of Rhode Island for formulating 1 of the screening questions, and Marissa Meucci for her assistance with the cognitive testing and formative research for the screening questions.
Disclosures
The analyses on which this study is based were performed by Healthcentric Advisors under contract HHSM 5002011‐RI10C, titled Utilization and Quality Control Peer Review for the State of Rhode Island, sponsored by the Centers for Medicare and Medicaid Services, US Department of Health and Human Services. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US government. The authors report no conflicts of interest.
- Evaluating the 'all‐comers' design: a comparison of participants in two 'all‐comers' PCI trials with non‐participants. Eur Heart J. 2011;32(17):2161–2167.
- Can the randomized controlled trial literature generalize to nonrandomized patients? J Consult Clin Psychol. 2005;73(1):127–135.
- The care transitions intervention: translating from efficacy to effectiveness. Arch Intern Med. 2011;171(14):1232–1237.
- Determinants of patient participation in clinical studies requiring informed consent: why patients enter a clinical trial. Patient Educ Couns. 1998;35(2):111–125.
- Coping self‐efficacy as a predictor of adherence to antiretroviral therapy in men and women living with HIV in Kenya. AIDS Patient Care STDS. 2011;25(9):557–561.
- Self‐reported influences of hopelessness, health literacy, lifestyle action, and patient inertia on blood pressure control in a hypertensive emergency department population. Am J Med Sci. 2009;338(5):368–372.
- Increasing recruitment to randomised trials: a review of randomised controlled trials. BMC Med Res Methodol. 2006;6:34.
- Age‐, sex‐, and race‐based differences among patients enrolled versus not enrolled in acute lung injury clinical trials. Crit Care Med. 2010;38(6):1450–1457.
- Patients' shame and attitudes toward discussing the results of literacy screening. J Health Commun. 2007;12(8):721–732.
- Impact of detailed informed consent on research subjects' participation: a prospective, randomized trial. J Emerg Med. 2008;34(3):269–275.
- A global measure of perceived stress. J Health Soc Behav. 1983;24:385–396. Available at: http://www.psy.cmu.edu/∼scohen/globalmeas83.pdf. Accessed May 10, 2012.
- Pfizer, Inc. Clear Health Communication: The Newest Vital Sign. Available at: http://www.pfizerhealthliteracy.com/asset/pdf/NVS_Eng/files/nvs_flipbook_english_final.pdf. Accessed May 10, 2012.
- Relationship of preventive health practices and health literacy: a national study. Am J Health Behav. 2008;32(3):227–242.
- Predictors of medication self‐management skill in a low‐literacy population. J Gen Intern Med. 2006;21:852–856.
- Does how you do depend on how you think you'll do? A systematic review of the evidence for a relation between patients' recovery expectations and health outcomes [published correction appears in CMAJ. 2001;165(10):1303]. CMAJ. 2001;165(2):174–179.
- Health care costs in the last week of life: associations with end‐of‐life conversations. Arch Intern Med. 2009;169(5):480–488.
- Exploring health literacy competencies in community pharmacy. Health Expect. 2010;15(1):12–22.
- The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166(17):1822–1828.
- A prospective study of posttraumatic stress symptoms and non‐adherence in survivors of a myocardial infarction (MI). Gen Hosp Psychiatry. 2001;23:215–222.
- Posttraumatic stress, non‐adherence, and adverse outcomes in survivors of a myocardial infarction. Psychosom Med. 2004;66:521–526.
- The implications of health literacy on patient‐provider communication. Arch Dis Child. 2008;93:428–432.
- Strategies to improve recruitment to research studies. Cochrane Database Syst Rev. 2007;2:MR000013.
- Strategies to improve recruitment to randomised controlled trials. Cochrane Database Syst Rev. 2010;4:MR000013.
- Addressing diabetes racial and ethnic disparities: lessons learned from quality improvement collaboratives. Diabetes Manag (Lond). 2011;1(6):653–660.
- Racial differences in eligibility and enrollment in a smoking cessation clinical trial. Health Psychol. 2011;30(1):40–48.
- The disappearing subject: exclusion of people with cognitive impairment from research. J Am Geriatr Soc. 2012;6:413–419.
- The role of patient preferences in cost‐effectiveness analysis: a conflict of values? Pharmacoeconomics. 2009;27(9):705–712.
- A comprehensive care management program to prevent chronic obstructive pulmonary disease hospitalizations. Ann Intern Med. 2012;156(10):673–683.
- The role of expectations in patients' reports of post‐operative outcomes and improvement following therapy. Med Care. 1993;31:1043–1056.
- Expectations and outcomes after hip fracture among the elderly. Int J Aging Hum Dev. 1992;34:339–350.
- Role of patients' view of their illness in predicting return to work and functioning after myocardial infarction: longitudinal study. BMJ. 1996;312:1191–1194.
Randomized controlled trials (RCTs) generally provide the most rigorous evidence for clinical practice guidelines and quality‐improvement initiatives. However, 2 major shortcomings limit the ability to broadly apply these results to the general population. One has to do with sampling bias (due to subject consent and inclusion/exclusion criteria) and the other with potential differences between participants and eligible nonparticipants. The latter may be of particular importance in trials of behavioral interventions (rather than medication trials), which often require substantial participant effort.
First, individuals who provide written consent to participate in RCTs of behavioral interventions typically represent a minority of those approached and therefore may not be representative of the target population. Although the consenting proportion is often not disclosed, some estimate that only 35%–50% of eligible subjects typically participate.[1, 2, 3] These estimates mirror the authors' prior experience with a 55.2% consent rate among subjects approached for a Medicare quality‐improvement behavioral intervention.[3] Though the literature is sparse, it suggests that eligible individuals who decline to participate in either interventions or usual care may differ from participants in their perception of intervention risks and effort[4] or in their levels of self‐efficacy or confidence in recovery.[5, 6] Relatively low enrollment rates mean that much of the population remains unstudied; however, evidence‐based interventions are often applied to populations broader than those included in the original analyses.
Additionally, although some nonparticipants may correctly decide that they do not need the assistance of a proposed intervention and therefore decline to participate, others may inappropriately judge the intervention's potential benefit and applicability when declining. In other words, electing to not participate in a study, despite eligibility, may reflect more than a refusal of inconvenience, disinterest, or desire to contribute to knowledge; for some individuals it may offer a proxy statement about health knowledge, personal beliefs, attitudes, and needs, including perceived stress,[5] cultural relevance,[7, 8] and literacy/health literacy.[9, 10] Characterizing these patients can help us to modify recruitment approaches and improve participation so that participants better represent the target population. If these differences also relate to patients' adherence to care recommendations, a more nuanced understanding could improve ways to identify and engage potentially nonadherent patients to improve health outcomes.
We hypothesized that we could identify characteristics that differ between behavioral‐intervention participants and eligible nonparticipants using a set of screening questions. We proposed that these characteristics, including constructs related to perceived stress, recovery expectation, health literacy, insight and action regarding advance care planning, and confusion by any question, would predict the likelihood of consenting to a behavioral intervention requiring substantial subject engagement. Some of these characteristics may relate to adherence to preventive care or treatment recommendations. We did not specifically hypothesize about the distribution of demographic differences.
METHODS
Study Design
We conducted a prospective observational study nested within a larger behavioral intervention.
Screening Question Design
We adapted our screening questions from several previously validated surveys, selecting questions related to perceived stress and self‐efficacy,[11] recovery expectations, health literacy/medication label interpretation,[12] and discussing advance directives (Table 1). Some of these characteristics may relate to adherence to preventive care or treatment programs[13, 14] or to clinical outcomes.[15, 16]
Screening Question | Adapted From Original Validated Question | Source | Construct |
---|---|---|---|
In the last week, how often have you felt that you are unable to control the important things in your life? (Rarely, sometimes, almost always) | In the last month, how often have you felt that you were unable to control the important things in your life? (Never, almost never, sometimes, fairly often, very often) | Adapted from the Perceived Stress Scale (PSS‐14).[11] | Perceived stress, self‐efficacy |
In the last week, how often have you felt that difficulties were piling up so high that you could not overcome them? (Rarely, sometimes, almost always) | In the last month, how often have you felt difficulties were piling up so high that you could not overcome them? (Never, almost never, sometimes, fairly often, very often) | Adapted from the Perceived Stress Scale (PSS‐14).[11] | Perceived stress, self‐efficacy |
How sure are you that you can go back to the way you felt before being hospitalized? (Not sure at all, somewhat sure, very sure) | | Courtesy of Phil Clark, PhD, University of Rhode Island, drawing on research on resilience. Similar questions are used in other studies, including studies of postsurgical recovery.[29, 30, 31] | Recovery expectation, resilience |
Even if you have not made any decisions, have you talked with your family members or doctor about what you would want for medical care if you could not speak for yourself? (Yes, no) | | Based on consumer‐targeted materials on advance care planning. | Advance care planning |
(Show patient a picture of prescription label.) How many times a day should someone take this medicine? (Correct, incorrect) | (Show patient a picture of ice cream label.) If you eat the entire container, how many calories will you eat? (Correct, incorrect) | Adapted from Pfizer's Clear Health Communication: The Newest Vital Sign.[12] | Health literacy |
Prior to administering the screening questions, we performed cognitive testing with residents of an assisted‐living facility (N=10), a population that resembles our study's target population. In response to cognitive testing, we eliminated a question not interpreted easily by any of the participants, identified wording changes to clarify questions, simplified answer choices for ease of response (especially because questions are delivered verbally), and moved the most complicated (and potentially most embarrassing) question to the end, with more straightforward questions toward the beginning. We also substantially enlarged the image of a standard medication label to improve readability. Our final tool included 5 questions (Table 1).
The final instrument prompted coaches to record patient confusion. Additionally, the advance‐directive question included a "refused to answer" option, and the medication question included "unable to answer (needs glasses, too tired, etc.)," a potential marker of low health literacy if used as an excuse to avoid embarrassment.[17]
Setting
We recruited inpatients at 5 Rhode Island acute‐care hospitals, including 1 community hospital, 3 teaching hospitals, and a tertiary‐care center and teaching hospital, ranging from 174 beds to 719 beds. Recruitment occurred from November 2010 to April 2011. The hospitals' respective institutional review boards approved the screening questions.
Study Population
We recruited a convenience sample of consecutively identified hospitalized Medicare fee‐for‐service beneficiaries, identified as (1) eligible for the subsequent behavioral intervention based on inpatient census lists and (2) willing to discuss an offer for a home‐based behavioral intervention. The behavioral intervention, based on the Care Transitions Intervention and described elsewhere,[3, 18] included a home visit and 2 phone calls (each about 1 hour). Coaches used a personal health record to help patients and/or caregivers better manage their health by (1) being able to list their active medical conditions and medications and (2) understanding warning signs indicating a need to reach out for help, including getting a timely medical appointment after hospitalization. The population for the present study included individuals approached to discuss participation in the behavioral intervention who also agreed to answer the screening questions.
Inclusion/Exclusion Criteria
We included hospitalized Medicare fee‐for‐service beneficiaries. We excluded patients who were current long‐term care residents, were to be discharged to long‐term or skilled care, or had a documented hospice referral. We also excluded patients with limited English proficiency or who were judged to have inadequate cognitive function, unless a caregiver agreed to receive the intervention as a proxy. We made these exclusions when recruiting for the behavioral intervention. Because we presented the screening questions to a subset of those approached for the behavioral intervention, we did not further exclude anyone. In other words, we offered the screening questions to all 295 people we approached during this study time period (100%).
Screening‐Question Study Process
Coaches asked patients to answer the 5 screening questions immediately after offering them the opportunity to participate in the behavioral intervention, regardless of whether or not they accepted the behavioral intervention. This study examines the subset of patients approached for the behavioral intervention who verbally consented to answer the screening questions.
Data Sources and Covariates
We analyzed primary data from the screening questions and behavioral intervention (for those who consented to participate), as well as Medicare claims and Medicaid enrollment data. We matched screening‐question data from November 2010 through April 2011 with Medicare Part A claims from October 2010 through May 2011 to calculate 30‐day readmission rates.
We obtained the following information for patients offered the behavioral intervention: (1) responses to screening questions, (2) whether patients consented to the behavioral intervention, (3) exposure to the behavioral intervention, and (4) recruitment date. Medicare claims data included (1) admission and discharge dates to calculate the length of stay, (2) index diagnosis, (3) hospital, and (4) site of discharge. Medicare enrollment data provided information on (1) Medicaid/Medicare dual‐eligibility status, (2) sex, and (3) patient‐reported race. We matched data based on patient name and date of birth. Our primary outcome was consent to the behavioral intervention. Secondarily, we reviewed posthospital utilization patterns, including hospital readmission, emergency‐department use, and use of home‐health services.
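The matching step described above (joining screening‐question data to Medicare claims on patient name and date of birth) can be sketched as a simple keyed join. This is an illustrative sketch only, not the authors' actual code; all field names and records below are hypothetical.

```python
# Hypothetical sketch of record linkage on (name, date of birth).
# Field names ("name", "dob", etc.) are illustrative assumptions.

def match_records(screening_rows, claims_rows):
    """Join two record lists on a (lowercased name, date_of_birth) key."""
    claims_by_key = {(c["name"].lower(), c["dob"]): c for c in claims_rows}
    matched = []
    for s in screening_rows:
        key = (s["name"].lower(), s["dob"])
        claim = claims_by_key.get(key)
        if claim is not None:
            # Merge screening responses with the matched claim record.
            matched.append({**s, **claim})
    return matched

screening = [{"name": "Jane Doe", "dob": "1932-05-01", "consented": True}]
claims = [{"name": "Jane Doe", "dob": "1932-05-01", "length_of_stay": 7}]
merged = match_records(screening, claims)
```

In practice, name/date‐of‐birth linkage is sensitive to spelling and formatting differences, which is one reason exact‐match joins like this are usually paired with manual review.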
Statistical Analysis
We categorized patients into 2 groups (Figure 1): participants (consented to the behavioral intervention) and nonparticipants (eligible for the behavioral intervention but declined to participate). We excluded responses for those confused by the question (no response). For the response scales "rarely, sometimes, almost always" and "not sure at all, somewhat sure, very sure," we isolated the most negative response, grouping the middle and most positive responses (Table 2). For the medication‐label question, we grouped "incorrect" and "unable to answer (needs glasses, too tired, etc.)" responses. We compared demographic differences between behavioral‐intervention participants and nonparticipants using chi‐square tests (categorical variables) and Student t tests (continuous variables). We then used multivariate logistic regression to analyze differences in consent to the behavioral intervention based on screening‐question responses, adjusting for demographics that differed significantly in the bivariate comparisons.

Screening‐Question Response | Adjusted OR (95% CI) | P Value |
---|---|---|
In the last week, how often have you felt that you are unable to control the important things in your life? | ||
Out of control (Almost always) | 0.35 (0.14‐0.92) | 0.034a |
In control (Sometimes, rarely) | 1.00 (Ref) | |
In the last week, how often have you felt that difficulties were piling up so high that you could not overcome them? | ||
Overwhelmed (Almost always) | 0.41 (0.16‐1.07) | 0.069 |
Not overwhelmed (Sometimes, rarely) | 1.00 (Ref) | |
How sure are you that you can go back to the way you felt before being hospitalized? | ||
Not confident (Not sure at all) | 0.17 (0.06‐0.45) | 0.001a |
Confident (Somewhat sure, very sure) | 1.00 (Ref) | |
Even if you have not made any decisions, have you talked with your family members or doctor about what you would want for medical care if you could not speak for yourself? | ||
No | 0.45 (0.13‐1.64) | 0.227 |
Yes | 1.00 (Ref) | |
How many times a day should someone take this medicine? (Show patient a medication label) | ||
Incorrect answer | 3.82 (1.12‐13.03) | 0.033a |
Correct answer | 1.00 (Ref) | |
Confused by any question? | ||
Yes | 0.11 (0.05‐0.24) | 0.001a |
No | 1.00 (Ref) |
The authors used SAS version 9.2 (SAS Institute, Inc., Cary, NC) for all analyses.
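The response grouping and bivariate comparison steps above can be sketched as follows. This is a standard‐library Python illustration of the described logic, not the authors' SAS code; the function names are assumptions, and the chi‐square helper computes only the Pearson statistic for a 2×2 table.

```python
# Illustrative sketch of the analysis steps (the actual analyses used SAS 9.2).

def dichotomize_stress(answer):
    """Isolate the most negative response: 'almost always' -> 1;
    the middle and most positive answers ('sometimes', 'rarely') -> 0."""
    return 1 if answer == "almost always" else 0

def dichotomize_label(answer):
    """Group 'incorrect' with 'unable to answer' -> 1; 'correct' -> 0."""
    return 1 if answer in ("incorrect", "unable to answer") else 0

def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic for a 2x2 table [[a, b], [c, d]]."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / ((a + b) * (c + d) * (a + c) * (b + d))
```

A logistic regression on the resulting 0/1 indicators (adjusted for the significant demographics) would then yield the odds ratios reported in Table 2.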
RESULTS
Of the 295 patients asked to complete the screening questions, 260 (88.1%) consented to answer the screening questions and 35 (11.9%) declined. More than half of those who answered the screening questions consented to participate in the behavioral intervention (160; 61.5%) (Figure 1). When compared with nonparticipants, participants in the behavioral intervention were younger (25.6% vs 40.0% aged ≥85 years, P=0.028), had a longer average length of hospital stay (7.9 vs 6.1 days, P=0.008), were more likely to be discharged home without clinical services (35.0% vs 23.0%, P=0.041), and were unevenly distributed across the 5 recruitment‐site hospitals, coming primarily from the teaching hospitals (P<0.001) (Table 3). There were no significant differences based on race, sex, dual‐eligible Medicare/Medicaid status, presence of a caregiver, or index diagnosis.
Patient Characteristics | Declined (n=100) | Consented (n=160) | P Value |
---|---|---|---|
Male, n (%) | 34 (34.0) | 52 (32.5) | 0.803 |
Race, n (%) | |||
White | 94 (94.0) | 151 (94.4) | 0.691 |
Black | 2 (2.0) | 5 (3.1) | |
Other | 4 (4.0) | 4 (2.5) | |
Age, n (%), y | |||
<65 | 17 (17.0) | 23 (14.4) | 0.028a |
65–74 | 14 (14.0) | 42 (26.3) |
75–84 | 29 (29.0) | 54 (33.8) |
≥85 | 40 (40.0) | 41 (25.6) |
Dual eligible, n (%)b | 11 (11.0) | 24 (15.0) | 0.358 |
Caregiver present, n (%) | 17 (17.0) | 34 (21.3) | 0.401 |
Length of stay, mean (SD), d | 6.1 (4.1) | 7.9 (4.8) | 0.008a |
Index diagnosis, n (%) | |||
Acute MI | 3 (3.0) | 6 (3.8) | 0.806 |
CHF | 6 (6.0) | 20 (12.5) | 0.111 |
Pneumonia | 7 (7.0) | 9 (5.6) | 0.572 |
COPD | 6 (6.0) | 14 (8.8) | 0.484
Discharged home without clinical services, n (%)c | 23 (23.0) | 56 (35.0) | 0.041a |
Hospital site | |||
Hospital 1 | 15 (15.0) | 43 (26.9) | <0.001a |
Hospital 2 | 20 (20.0) | 26 (16.3) | |
Hospital 3 | 15 (15.0) | 23 (14.4) | |
Hospital 4 | 2 (2.0) | 48 (30.0) | |
Hospital 5 | 48 (48.0) | 20 (12.5) |
Patients who identified themselves as being unable to control important things in their lives were 65% less likely to consent to the behavioral intervention than those in control (odds ratio [OR]: 0.35, 95% confidence interval [CI]: 0.14‐0.92), and those who did not feel confident about recovering were 83% less likely to consent (OR: 0.17, 95% CI: 0.06‐0.45). Individuals who were confused by any question were 89% less likely to consent (OR: 0.11, 95% CI: 0.05‐0.24). Individuals who answered the medication question incorrectly were nearly 4 times as likely to consent (OR: 3.82, 95% CI: 1.12‐13.03). There were no significant differences in consent for feeling overwhelmed (difficulties piling up) or for having discussed advance care planning with family members or doctors.
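The percentages quoted above follow directly from the odds ratios: an OR below 1 is conventionally reported as "(1 − OR) × 100% lower odds" of consenting. A minimal worked check, using the point estimates from Table 2:

```python
# Worked arithmetic behind the "% less likely" phrasing (strictly, lower odds).

def pct_lower_odds(odds_ratio):
    """Convert an odds ratio < 1 into a rounded '% lower odds' figure."""
    return round((1 - odds_ratio) * 100)

assert pct_lower_odds(0.35) == 65  # unable to control important things
assert pct_lower_odds(0.17) == 83  # not confident about recovery
assert pct_lower_odds(0.11) == 89  # confused by any question
```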
We had insufficient power to detect significant differences in posthospital utilization (including hospital readmission, emergency‐department use, and receipt of home health), based on screening‐question responses (data not shown).
DISCUSSION
We find that patients who declined to participate in the behavioral intervention (eligible nonparticipants) differed from participants in 3 important ways: perceived stress, recovery expectation, and health literacy. As hypothesized, patients with higher perceived stress and lower recovery expectation were less likely to consent to the behavioral intervention, even after adjusting for demographic and healthcare‐utilization differences. Contrary to our hypothesis, patients who incorrectly answered the medication question were more likely to consent to the intervention than those who correctly answered.
Characterizing nonparticipants and participants can offer important insight into the limitations of the research that informs clinical guidelines and behavioral interventions. Such characteristics, if associated with lower rates of adherence to recommended health behaviors or treatment plans, could also indicate how to better engage patients in interventions or other aspects of their care. For example, self‐efficacy (closely related to perceived stress) and hopelessness regarding clinical outcomes (similar to low recovery expectation in the present study) are associated with nonadherence to medication plans and other care in some populations.[5, 6] Other more extreme stress, like that following a major medical event, has also been associated with a lower rate of adherence to medication regimens and a resulting higher rate of hospital readmission and mortality.[19, 20] People with low health literacy (compared with adequate health literacy) are more likely to report being confused about their medications, to need help reading medication labels, and to miss appointments because of trouble reading reminder cards.[9] Identifying these characteristics may assist providers in helping patients address adherence barriers by first accurately identifying the root of patient issues (eg, whether the lack of confidence in recovery is rooted in lack of resources or social support), then potentially referring to community resources where possible. For example, some states (including Rhode Island, this study's location) may have Aging and Disability Resource Centers dedicated to linking elderly people with transportation, decision support, and other resources to support quality care.
The association between health literacy and intervention participation remains uncertain. Our question, which assessed interpretation of a prescription label as a health‐literacy proxy, may have given patients insight into their limited health literacy that motivated them to accept the subsequent behavioral intervention. Others have found that lower‐health‐literacy patients want their providers to know that they did not understand some health words,[9] though they may be less likely to ask questions, request additional services, or seek new information during a medical encounter.[21] In our study, those who correctly answered the medication‐label question were almost mutually exclusive from those who were otherwise stressed (12% overlap; data not shown). Thus, patients who correctly answer this question may correctly realize that they do not need the support offered by the behavioral intervention and decline to participate. For other patients, perceived stress and poor recovery expectations may be more immediate and important determinants of declination, with patients too stressed to volunteer for another task, even if it involves much‐needed assistance.
The frequency with which patients were confused by the questions merits further comment and may also be driven by stress. Though each question seeks to identify the impact of a specific construct (Table 1), being confused by any question may reflect a more general (or subacute) level of cognitive impairment or generalized low health literacy not limited to the applied numeracy of the medication‐label question. We excluded confused responses to demonstrate more clearly the impact of each individual construct.
The impact of these characteristics may be affected by study design or other characteristics. One of the few studies to examine (via RCT) how methods affect consent found that participation decreased with increasing complexity of the consent process: written consent yielded the lowest participation, limited written consent was higher, and verbal consent was the highest.[10] Other tactics to increase consent include monetary incentives,[22] culturally sensitive materials,[7] telephone reminders,[23] an opt‐out instead of opt‐in approach,[23] and an open design where participants know which treatment they are receiving.[23] We do not know how these tactics relate to the characteristics captured in our screening questions, although other characteristics we measured, such as patients' self‐identified race, have been associated with intervention participation and access to care,[8, 24, 25] and patients who perceive that the benefit of the intervention outweighs expected risks and time requirements are more likely to consent.[4] We intentionally minimized the number of screening questions to encourage participation. The high rate of consent to our screening questions compared with consent to the (more involved) behavioral intervention reveals how sensitive patients are to the perceived invasiveness of an intervention.
We note several limitations. First, overall generalizability is limited due to our small sample size, use of consecutive convenience sampling, and exclusion criteria (eg, patients discharged to long‐term or skilled nursing care). Second, these results may not apply to patients who are not hospitalized; hospitalized patients may have different motivations and stressors regarding their involvement in their care. Additionally, although we included as many people with mild cognitive impairment as possible by proxy through caregivers, we excluded some who did not have caregivers, potentially limiting our ability to assess how cognition affects the choice to accept the behavioral intervention. Because researchers often explicitly exclude individuals based on cognitive impairment, differences between recruited subjects and the population at large may be particularly high among elderly patients, where up to half of the eligible population may be affected by cognitive impairment.[26] Further research into successfully engaging caregivers as a way to reach otherwise‐excluded patients with cognitive impairment can help to mitigate threats to generalizability. Finally, our screening questions are based on validated questions, but we rearranged our question wording, simplified answer choices, and removed them from their original context. Thus, the questions were not validated in our population or when administered in this manner. Although we conducted cognitive testing, further validity and reliability testing are necessary to translate these questions into a general screening tool. The medication‐label question also requires revision; in data collection and analysis, we assume that patients who were unable to answer (needs glasses, too tired, etc.) were masking an inability to respond correctly. Though the use of this excuse is cited in the literature,[17] we cannot be certain that our treatment of it in these screening questions is generalizable.
Generalizability also applies to how we grouped responses. Isolating the most negative response (by grouping the middle answer with the most positive answer) most specifically identifies individuals more likely to need assistance and is therefore clinically pertinent, but it potentially fails to identify individuals who need help yet do not choose the most extreme answer. Further research to refine the screening questions might also consider the timeframe of the perceived stress questions (past week rather than past month); this timeframe may capture stress specific to the acute medical situation rather than general or unrelated perceived stress. Though this study cannot test this hypothesis, individuals with higher pre‐illness perceived stress may be more interested in addressing the issues that were stressors prior to acute illness than in the offered behavioral intervention. Additionally, some of the questions were highly correlated (Q1 and Q2), indicating potential for shortening the screening questionnaire.
Still, these findings further the discussion of how to identify and consent hospitalized patients for participation in behavioral interventions, both for research and for routine clinical care. Researchers should specifically consider how to engage individuals who are stressed and are not confident about recovery to improve reach and effectiveness. For example, interventions should prospectively collect data on stress and confidence in recovery and include protocols to support people who are positively identified with these characteristics. These characteristics may also offer insight into improving patient and caregiver engagement; more research is needed into characteristics related to patients' willingness to seek assistance in care. We are not the first to suggest that characteristics not observed in medical charts may impact patient completion or response to behavioral interventions,[27, 28] and considering differences between participants and eligible nonparticipants in clinical care delivery and interventions can strengthen the evidence base for clinical improvements, particularly related to patient self‐management. The implications are useful for both practicing clinicians and larger systems examining the comparativeness of patient interventions and generalizing results from RCTs.
Acknowledgments
The authors thank Phil Clark, PhD, and the SENIOR Project (Study of Exercise and Nutrition in Older Rhode Islanders) research team at the University of Rhode Island for formulating 1 of the screening questions, and Marissa Meucci for her assistance with the cognitive testing and formative research for the screening questions.
Disclosures
The analyses on which this study is based were performed by Healthcentric Advisors under contract HHSM 5002011‐RI10C, titled Utilization and Quality Control Peer Review for the State of Rhode Island, sponsored by the Centers for Medicare and Medicaid Services, US Department of Health and Human Services. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US government. The authors report no conflicts of interest.
Randomized controlled trials (RCTs) generally provide the most rigorous evidence for clinical practice guidelines and quality‐improvement initiatives. However, 2 major shortcomings limit the ability to broadly apply these results to the general population. One has to do with sampling bias (due to subject consent and inclusion/exclusion criteria) and the other with potential differences between participants and eligible nonparticipants. The latter may be of particular importance in trials of behavioral interventions (rather than medication trials), which often require substantial participant effort.
First, individuals who provide written consent to participate in RCTs of behavioral interventions typically represent a minority of those approached and therefore may not be representative of the target population. Although the consenting proportion is often not disclosed, some estimate that only 35% to 50% of eligible subjects typically participate.[1, 2, 3] These estimates mirror the authors' prior experience with a 55.2% consent rate among subjects approached for a Medicare quality‐improvement behavioral intervention.[3] Though the literature is sparse, it suggests that eligible individuals who decline to participate in either interventions or usual care may differ from participants in their perception of intervention risks and effort[4] or in their levels of self‐efficacy or confidence in recovery.[5, 6] Relatively low enrollment rates mean that much of the population remains unstudied; however, evidence‐based interventions are often applied to populations broader than those included in the original analyses.
Additionally, although some nonparticipants may correctly decide that they do not need the assistance of a proposed intervention and therefore decline to participate, others may inappropriately judge the intervention's potential benefit and applicability when declining. In other words, electing to not participate in a study, despite eligibility, may reflect more than a refusal of inconvenience, disinterest, or desire to contribute to knowledge; for some individuals it may offer a proxy statement about health knowledge, personal beliefs, attitudes, and needs, including perceived stress,[5] cultural relevance,[7, 8] and literacy/health literacy.[9, 10] Characterizing these patients can help us to modify recruitment approaches and improve participation so that participants better represent the target population. If these differences also relate to patients' adherence to care recommendations, a more nuanced understanding could improve ways to identify and engage potentially nonadherent patients to improve health outcomes.
We hypothesized that we could identify characteristics that differ between behavioral‐intervention participants and eligible nonparticipants using a set of screening questions. We proposed that these characteristics (constructs related to perceived stress, recovery expectation, health literacy, and insight into and action on advance care planning, along with confusion by any question) would predict the likelihood of consenting to a behavioral intervention requiring substantial subject engagement. Some of these characteristics may relate to adherence to preventive care or treatment recommendations. We did not specifically hypothesize about the distribution of demographic differences.
METHODS
Study Design
Prospective observational study conducted within a larger behavioral intervention.
Screening Question Design
We adapted our screening questions from several previously validated surveys, selecting questions related to perceived stress and self‐efficacy,[11] recovery expectations, health literacy/medication label interpretation,[12] and discussing advance directives (Table 1). Some of these characteristics may relate to adherence to preventive care or treatment programs[13, 14] or to clinical outcomes.[15, 16]
Screening Question | Adapted From Original Validated Question | Source | Construct |
---|---|---|---|
In the last week, how often have you felt that you are unable to control the important things in your life? (Rarely, sometimes, almost always) | In the last month, how often have you felt that you were unable to control the important things in your life? (Never, almost never, sometimes, fairly often, very often) | Adapted from the Perceived Stress Scale (PSS‐14).[11] | Perceived stress, self‐efficacy |
In the last week, how often have you felt that difficulties were piling up so high that you could not overcome them? (Rarely, sometimes, almost always) | In the last month, how often have you felt difficulties were piling up so high that you could not overcome them? (Never, almost never, sometimes, fairly often, very often) | Adapted from the Perceived Stress Scale (PSS‐14).[11] | Perceived stress, self‐efficacy |
How sure are you that you can go back to the way you felt before being hospitalized? (Not sure at all, somewhat sure, very sure) | Courtesy of Phil Clark, PhD, University of Rhode Island, drawing on research on resilience. Similar questions are used in other studies, including studies of postsurgical recovery.[29, 30, 31] | Recovery expectation, resilience | |
Even if you have not made any decisions, have you talked with your family members or doctor about what you would want for medical care if you could not speak for yourself? (Yes, no) | Based on consumer‐targeted materials on advance care planning. | Advance care planning | |
(Show patient a picture of prescription label.) How many times a day should someone take this medicine? (Correct, incorrect) | (Show patient a picture of ice cream label.) If you eat the entire container, how many calories will you eat? (Correct, incorrect) | Adapted from Pfizer's Clear Health Communication: The Newest Vital Sign.[12] | Health literacy |
Prior to administering the screening questions, we performed cognitive testing with residents of an assisted‐living facility (N=10), a population that resembles our study's target population. In response to cognitive testing, we eliminated a question not interpreted easily by any of the participants, identified wording changes to clarify questions, simplified answer choices for ease of response (especially because questions are delivered verbally), and moved the most complicated (and potentially most embarrassing) question to the end, with more straightforward questions toward the beginning. We also substantially enlarged the image of a standard medication label to improve readability. Our final tool included 5 questions (Table 1).
The final instrument prompted coaches to record patient confusion. Additionally, the advance‐directive question included a "refused to answer" option, and the medication question included "unable to answer (needs glasses, too tired, etc.)," a potential marker of low health literacy if used as an excuse to avoid embarrassment.[17]
Setting
We recruited inpatients at 5 Rhode Island acute‐care hospitals, including 1 community hospital, 3 teaching hospitals, and a tertiary‐care center and teaching hospital, ranging from 174 beds to 719 beds. Recruitment occurred from November 2010 to April 2011. The hospitals' respective institutional review boards approved the screening questions.
Study Population
We recruited a convenience sample of consecutively identified hospitalized Medicare fee‐for‐service beneficiaries, identified as (1) eligible for the subsequent behavioral intervention based on inpatient census lists and (2) willing to discuss an offer for a home‐based behavioral intervention. The behavioral intervention, based on the Care Transitions Intervention and described elsewhere,[3, 18] included a home visit and 2 phone calls (each about 1 hour). Coaches used a personal health record to help patients and/or caregivers better manage their health by (1) being able to list their active medical conditions and medications and (2) understanding warning signs indicating a need to reach out for help, including getting a timely medical appointment after hospitalization. The population for the present study included individuals approached to discuss participation in the behavioral intervention who also agreed to answer the screening questions.
Inclusion/Exclusion Criteria
We included hospitalized Medicare fee‐for‐service beneficiaries. We excluded patients who were current long‐term care residents, were to be discharged to long‐term or skilled care, or had a documented hospice referral. We also excluded patients with limited English proficiency or who were judged to have inadequate cognitive function, unless a caregiver agreed to receive the intervention as a proxy. We made these exclusions when recruiting for the behavioral intervention. Because we presented the screening questions to a subset of those approached for the behavioral intervention, we did not further exclude anyone. In other words, we offered the screening questions to all 295 people we approached during this study time period (100%).
Screening‐Question Study Process
Coaches asked patients to answer the 5 screening questions immediately after offering them the opportunity to participate in the behavioral intervention, regardless of whether or not they accepted the behavioral intervention. This study examines the subset of patients approached for the behavioral intervention who verbally consented to answer the screening questions.
Data Sources and Covariates
We analyzed primary data from the screening questions and behavioral intervention (for those who consented to participate), as well as Medicare claims and Medicaid enrollment data. We matched screening‐question data from November 2010 through April 2011 with Medicare Part A claims from October 2010 through May 2011 to calculate 30‐day readmission rates.
We obtained the following information for patients offered the behavioral intervention: (1) responses to screening questions, (2) whether patients consented to the behavioral intervention, (3) exposure to the behavioral intervention, and (4) recruitment date. Medicare claims data included (1) admission and discharge dates to calculate the length of stay, (2) index diagnosis, (3) hospital, and (4) site of discharge. Medicare enrollment data provided information on (1) Medicaid/Medicare dual‐eligibility status, (2) sex, and (3) patient‐reported race. We matched data based on patient name and date of birth. Our primary outcome was consent to the behavioral intervention. Secondarily, we reviewed posthospital utilization patterns, including hospital readmission, emergency‐department use, and use of home‐health services.
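The linkage step described above (matching screening records to claims by patient name and date of birth, then flagging 30‐day readmissions) can be sketched as follows. This is an illustrative reconstruction, not the authors' code; all patient names, dates, and field layouts are hypothetical.

```python
# Sketch of record linkage and 30-day readmission flagging.
# All names, dates, and field layouts below are hypothetical.
from datetime import date
from collections import defaultdict

screening = [
    {"name": "A. Smith", "dob": date(1935, 2, 1), "consented": True},
    {"name": "B. Jones", "dob": date(1928, 7, 15), "consented": False},
]
claims = [
    {"name": "A. Smith", "dob": date(1935, 2, 1),
     "admit": date(2010, 11, 3), "discharge": date(2010, 11, 10)},
    {"name": "A. Smith", "dob": date(1935, 2, 1),
     "admit": date(2010, 11, 20), "discharge": date(2010, 11, 25)},
    {"name": "B. Jones", "dob": date(1928, 7, 15),
     "admit": date(2010, 12, 1), "discharge": date(2010, 12, 6)},
]

# Group each patient's hospital stays by the (name, date of birth) key.
stays = defaultdict(list)
for c in claims:
    stays[(c["name"], c["dob"])].append(c)

def readmitted_within_30_days(patient_stays):
    """True if any admission starts within 30 days of a prior discharge."""
    patient_stays = sorted(patient_stays, key=lambda s: s["admit"])
    return any((nxt["admit"] - cur["discharge"]).days <= 30
               for cur, nxt in zip(patient_stays, patient_stays[1:]))

for record in screening:
    key = (record["name"], record["dob"])
    record["readmit_30d"] = readmitted_within_30_days(stays.get(key, []))
```

In this toy data, the first patient's second admission falls 10 days after the prior discharge and is flagged; the single-stay patient is not.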
Statistical Analysis
We categorized patients into 2 groups (Figure 1): participants (consented to the behavioral intervention) and nonparticipants (eligible for the behavioral intervention but declined to participate). We excluded responses for those confused by the question (no response). For the response scales rarely, sometimes, almost always and not sure at all, somewhat sure, very sure, we isolated the most negative response, grouping the middle and most positive responses (Table 2). For the medication‐label question, we grouped incorrect and unable to answer (needs glasses, too tired, etc.) responses. We compared demographic differences between behavioral‐intervention participants and nonparticipants using χ2 tests (categorical variables) and Student t tests (continuous variables). We then used multivariate logistic regression to analyze differences in consent to the behavioral intervention based on screening‐question responses, adjusting for demographics that differed significantly in the bivariate comparisons.
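The dichotomization and regression steps can be sketched as follows, using simulated data (not the study's) and a plain NumPy Newton–Raphson fit in place of standard statistical software. All variable names and the simulated effect sizes are assumptions for illustration only.

```python
# Sketch: dichotomize 3-level responses to isolate the most negative
# answer, then fit a multivariate logistic regression of consent on the
# responses. Data are simulated; coefficients are illustrative only.
import numpy as np

rng = np.random.default_rng(0)
n = 260
q1 = rng.integers(0, 3, n)  # 0=rarely, 1=sometimes, 2=almost always
q2 = rng.integers(0, 3, n)

# Isolate the most negative response, grouping the middle and most
# positive answers (as described in the text and Table 2).
x1 = (q1 == 2).astype(float)  # "almost always" vs the rest
x2 = (q2 == 2).astype(float)

# Simulated outcome: consent less likely for stressed respondents.
true_logit = 0.5 - 1.0 * x1 - 0.5 * x2
y = (rng.random(n) < 1 / (1 + np.exp(-true_logit))).astype(float)

# Newton-Raphson iterations for logistic regression (intercept + 2 terms).
X = np.column_stack([np.ones(n), x1, x2])
beta = np.zeros(3)
for _ in range(25):
    p = 1 / (1 + np.exp(-X @ beta))      # predicted consent probability
    grad = X.T @ (y - p)                 # score vector
    hess = (X * (p * (1 - p))[:, None]).T @ X  # observed information
    beta += np.linalg.solve(hess, grad)

odds_ratios = np.exp(beta[1:])  # adjusted ORs; values < 1 mean lower odds
print(odds_ratios)
```

Exponentiating the fitted coefficients yields the adjusted odds ratios reported in Table 2's format.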

Screening‐Question Response | Adjusted OR (95% CI) | P Value |
---|---|---|
In the last week, how often have you felt that you are unable to control the important things in your life? | ||
Out of control (Almost always) | 0.35 (0.14‐0.92) | 0.034a |
In control (Sometimes, rarely) | 1.00 (Ref) | |
In the last week, how often have you felt that difficulties were piling up so high that you could not overcome them? | ||
Overwhelmed (Almost always) | 0.41 (0.16‐1.07) | 0.069 |
Not overwhelmed (Sometimes, rarely) | 1.00 (Ref) | |
How sure are you that you can go back to the way you felt before being hospitalized? | ||
Not confident (Not sure at all) | 0.17 (0.06‐0.45) | 0.001a |
Confident (Somewhat sure, very sure) | 1.00 (Ref) | |
Even if you have not made any decisions, have you talked with your family members or doctor about what you would want for medical care if you could not speak for yourself? | ||
No | 0.45 (0.13‐1.64) | 0.227 |
Yes | 1.00 (Ref) | |
How many times a day should someone take this medicine? (Show patient a medication label) | ||
Incorrect answer | 3.82 (1.12‐13.03) | 0.033a |
Correct answer | 1.00 (Ref) | |
Confused by any question? | ||
Yes | 0.11 (0.05‐0.24) | 0.001a |
No | 1.00 (Ref) |
The authors used SAS version 9.2 (SAS Institute, Inc., Cary, NC) for all analyses.
RESULTS
Of the 295 patients asked to complete the screening questions, 260 (88.1%) consented to answer the screening questions and 35 (11.9%) declined. More than half of those who answered the screening questions consented to participate in the behavioral intervention (160; 61.5%) (Figure 1). When compared with nonparticipants, participants in the behavioral intervention were younger (25.6% aged ≥85 years vs 40.0% aged ≥85 years, P=0.028), had a longer average length of hospital stay (7.9 vs 6.1 days, P=0.008), were more likely to be discharged home without clinical services (35.0% vs 23.0%, P=0.041), and were unevenly distributed across the 5 recruitment‐site hospitals, coming primarily from the teaching hospitals (P<0.001) (Table 3). There were no significant differences based on race, sex, dual‐eligible Medicare/Medicaid status, presence of a caregiver, or index diagnosis.
Patient Characteristics | Declined (n=100) | Consented (n=160) | P Value |
---|---|---|---|
Male, n (%) | 34 (34.0) | 52 (32.5) | 0.803 |
Race, n (%) | |||
White | 94 (94.0) | 151 (94.4) | 0.691 |
Black | 2 (2.0) | 5 (3.1) | |
Other | 4 (4.0) | 4 (2.5) | |
Age, n (%), y | |||
<65 | 17 (17.0) | 23 (14.4) | 0.028a |
65–74 | 14 (14.0) | 42 (26.3) | 
75–84 | 29 (29.0) | 54 (33.8) | 
≥85 | 40 (40.0) | 41 (25.6) | 
Dual eligible, n (%)b | 11 (11.0) | 24 (15.0) | 0.358 |
Caregiver present, n (%) | 17 (17.0) | 34 (21.3) | 0.401 |
Length of stay, mean (SD), d | 6.1 (4.1) | 7.9 (4.8) | 0.008a |
Index diagnosis, n (%) | |||
Acute MI | 3 (3.0) | 6 (3.8) | 0.806 |
CHF | 6 (6.0) | 20 (12.5) | 0.111 |
Pneumonia | 7 (7.0) | 9 (5.6) | 0.572 |
COPD | 6 (6.0) | 14 (8.8) | 0.484 |
Discharged home without clinical services, n (%)c | 23 (23.0) | 56 (35.0) | 0.041a |
Hospital site | |||
Hospital 1 | 15 (15.0) | 43 (26.9) | <0.001a |
Hospital 2 | 20 (20.0) | 26 (16.3) | |
Hospital 3 | 15 (15.0) | 23 (14.4) | |
Hospital 4 | 2 (2.0) | 48 (30.0) | |
Hospital 5 | 48 (48.0) | 20 (12.5) |
Patients who identified themselves as being unable to control important things in their lives were 65% less likely to consent to the behavioral intervention than those in control (odds ratio [OR]: 0.35, 95% confidence interval [CI]: 0.14‐0.92), and those who did not feel confident about recovering were 83% less likely to consent (OR: 0.17, 95% CI: 0.06‐0.45). Individuals who were confused by any question were 89% less likely to consent (OR: 0.11, 95% CI: 0.05‐0.24). Individuals who answered the medication question incorrectly had nearly 4 times the odds of consenting (OR: 3.82, 95% CI: 1.12‐13.03). There were no significant differences in consent for feeling overwhelmed (difficulties piling up) or for having discussed advance care planning with family members or doctors.
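The "percent less likely" phrasing above follows directly from the adjusted odds ratios: an OR below 1 corresponds to a (1 − OR) × 100% reduction in the odds of consenting (strictly a statement about odds, not raw probabilities). A quick worked conversion, with OR values copied from the text:

```python
# Convert reported adjusted odds ratios to "% lower odds" figures.
# OR values are taken from the text; labels are shorthand.
reported_ors = {
    "unable to control important things": 0.35,
    "not confident about recovery": 0.17,
    "confused by any question": 0.11,
}
for label, odds_ratio in reported_ors.items():
    reduction_pct = round((1 - odds_ratio) * 100)
    print(f"{label}: OR {odds_ratio:.2f} -> {reduction_pct}% lower odds of consent")
```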
We had insufficient power to detect significant differences in posthospital utilization (including hospital readmission, emergency‐department use, and receipt of home health), based on screening‐question responses (data not shown).
DISCUSSION
We find that patients who declined to participate in the behavioral intervention (eligible nonparticipants) differed from participants in 3 important ways: perceived stress, recovery expectation, and health literacy. As hypothesized, patients with higher perceived stress and lower recovery expectation were less likely to consent to the behavioral intervention, even after adjusting for demographic and healthcare‐utilization differences. Contrary to our hypothesis, patients who incorrectly answered the medication question were more likely to consent to the intervention than those who correctly answered.
Characterizing nonparticipants and participants can offer important insight into the limitations of the research that informs clinical guidelines and behavioral interventions. Such characteristics could also indicate how to better engage patients in interventions or other aspects of their care, if associated with lower rates of adherence to recommended health behaviors or treatment plans. For example, self‐efficacy (closely related to perceived stress) and hopelessness regarding clinical outcomes (similar to low recovery expectation in the present study) are associated with nonadherence to medication plans and other care in some populations.[5, 6] Other more extreme stress, like that following a major medical event, has also been associated with a lower rate of adherence to medication regimens and a resulting higher rate of hospital readmission and mortality.[19, 20] People with low health literacy (compared with adequate health literacy) are more likely to report being confused about their medications, to request help reading medication labels, and to miss appointments because of trouble reading reminder cards.[9] Identifying these characteristics may assist providers in helping patients address adherence barriers by first accurately identifying the root of patient issues (eg, whether a lack of confidence in recovery is rooted in a lack of resources or social support), then potentially referring to community resources where possible. For example, some states (including Rhode Island, this study's location) may have Aging and Disability Resource Centers dedicated to linking elderly people with transportation, decision support, and other resources to support quality care.
The association between health literacy and intervention participation remains uncertain. Our question, which assessed interpretation of a prescription label as a health‐literacy proxy, may have given patients insight into their limited health literacy that motivated them to accept the subsequent behavioral intervention. Others have found that patients with lower health literacy want their providers to know that they did not understand some health words,[9] though they may be less likely to ask questions, request additional services, or seek new information during a medical encounter.[21] In our study, those who correctly answered the medication‐label question overlapped very little with those who were otherwise stressed (12% overlap; data not shown). Thus, patients who correctly answer this question may correctly realize that they do not need the support offered by the behavioral intervention and decline to participate. For other patients, perceived stress and poor recovery expectations may be more immediate and important determinants of declination, with patients too stressed to volunteer for another task, even if it involves much‐needed assistance.
The frequency with which patients were confused by the questions merits further comment and may also be driven by stress. Though each question seeks to identify the impact of a specific construct (Table 1), being confused by any question may reflect a more general (or subacute) level of cognitive impairment or generalized low health literacy not limited to the applied numeracy of the medication‐label question. We excluded confused responses to demonstrate more clearly the impact of each individual construct.
The impact of these characteristics may be affected by study design or other characteristics. One of the few studies to examine (via RCT) how methods affect consent found that participation decreased with increasing complexity of the consent process: written consent yielded the lowest participation, limited written consent was higher, and verbal consent was the highest.[10] Other tactics to increase consent include monetary incentives,[22] culturally sensitive materials,[7] telephone reminders,[23] an opt‐out instead of opt‐in approach,[23] and an open design where participants know which treatment they are receiving.[23] We do not know how these tactics relate to the characteristics captured in our screening questions, although other characteristics we measured, such as patients' self‐identified race, have been associated with intervention participation and access to care,[8, 24, 25] and patients who perceive that the benefit of the intervention outweighs expected risks and time requirements are more likely to consent.[4] We intentionally minimized the number of screening questions to encourage participation. The high rate of consent to our screening questions compared with consent to the (more involved) behavioral intervention reveals how sensitive patients are to the perceived invasiveness of an intervention.
We note several limitations. First, overall generalizability is limited by our small sample size, use of consecutive convenience sampling, and exclusion criteria (eg, patients discharged to long‐term or skilled nursing care). Second, these results may not apply to patients who are not hospitalized; hospitalized patients may have different motivations and stressors regarding their involvement in their care. Additionally, although we included as many people with mild cognitive impairment as possible by proxy through caregivers, we excluded those who did not have caregivers, potentially limiting our ability to assess how cognition affects the choice to accept the behavioral intervention. Because researchers often explicitly exclude individuals based on cognitive impairment, differences between recruited subjects and the population at large may be particularly pronounced among elderly patients, among whom up to half of the eligible population may be affected by cognitive impairment.[26] Further research into successfully engaging caregivers as a way to reach otherwise‐excluded patients with cognitive impairment can help to mitigate these threats to generalizability. Finally, our screening questions are based on validated questions, but we reworded them, simplified answer choices, and removed them from their original context. Thus, the questions were not validated in our population or when administered in this manner. Although we conducted cognitive testing, further validity and reliability testing are necessary to translate these questions into a general screening tool. The medication‐label question also requires revision; in data collection and analysis, we assumed that patients who were unable to answer (needs glasses, too tired, etc.) were masking an inability to respond correctly. Though the use of this excuse is cited in the literature,[17] we cannot be certain that our treatment of it in these screening questions is generalizable.
Generalizability also applies to how we grouped responses. Isolating the most negative response (by grouping the middle answer with the most positive answer) most specifically identifies individuals likely to need assistance and is therefore clinically pertinent, but it may fail to identify individuals who need help yet do not choose the most extreme answer. Further research to refine the screening questions might also consider the timeframe of the perceived‐stress questions (past week rather than past month); this timeframe may capture stress specific to the acute medical situation rather than general or unrelated perceived stress. Though this study cannot test this hypothesis, individuals with higher pre‐illness perceived stress may be more interested in addressing the issues that were stressors prior to acute illness than in the offered behavioral intervention. Additionally, some of the questions were highly correlated (Q1 and Q2), indicating potential to shorten the screening questionnaire.
Still, these findings further the discussion of how to identify and consent hospitalized patients for participation in behavioral interventions, both for research and for routine clinical care. Researchers should specifically consider how to engage individuals who are stressed and are not confident about recovery to improve reach and effectiveness. For example, interventions should prospectively collect data on stress and confidence in recovery and include protocols to support people who are positively identified with these characteristics. These characteristics may also offer insight into improving patient and caregiver engagement; more research is needed into characteristics related to patients' willingness to seek assistance in care. We are not the first to suggest that characteristics not observed in medical charts may impact patient completion of or response to behavioral interventions,[27, 28] and considering differences between participants and eligible nonparticipants in clinical care delivery and interventions can strengthen the evidence base for clinical improvements, particularly related to patient self‐management. The implications are useful both for practicing clinicians and for larger systems examining the comparative effectiveness of patient interventions and generalizing results from RCTs.
Acknowledgments
The authors thank Phil Clark, PhD, and the SENIOR Project (Study of Exercise and Nutrition in Older Rhode Islanders) research team at the University of Rhode Island for formulating 1 of the screening questions, and Marissa Meucci for her assistance with the cognitive testing and formative research for the screening questions.
Disclosures
The analyses on which this study is based were performed by Healthcentric Advisors under contract HHSM 5002011‐RI10C, titled Utilization and Quality Control Peer Review for the State of Rhode Island, sponsored by the Centers for Medicare and Medicaid Services, US Department of Health and Human Services. The content of this publication does not necessarily reflect the views or policies of the Department of Health and Human Services, nor does mention of trade names, commercial products, or organizations imply endorsement by the US government. The authors report no conflicts of interest.
- Evaluating the 'all‐comers' design: a comparison of participants in two 'all‐comers' PCI trials with non‐participants. Eur Heart J. 2011;32(17):2161–2167.
- Can the randomized controlled trial literature generalize to nonrandomized patients? J Consult Clin Psychol. 2005;73(1):127–135.
- The care transitions intervention: translating from efficacy to effectiveness. Arch Intern Med. 2011;171(14):1232–1237.
- Determinants of patient participation in clinical studies requiring informed consent: why patients enter a clinical trial. Patient Educ Couns. 1998;35(2):111–125.
- Coping self‐efficacy as a predictor of adherence to antiretroviral therapy in men and women living with HIV in Kenya. AIDS Patient Care STDS. 2011;25(9):557–561.
- Self‐reported influences of hopelessness, health literacy, lifestyle action, and patient inertia on blood pressure control in a hypertensive emergency department population. Am J Med Sci. 2009;338(5):368–372.
- Increasing recruitment to randomised trials: a review of randomised controlled trials. BMC Med Res Methodol. 2006;6:34.
- Age‐, sex‐, and race‐based differences among patients enrolled versus not enrolled in acute lung injury clinical trials. Crit Care Med. 2010;38(6):1450–1457.
- Patients' shame and attitudes toward discussing the results of literacy screening. J Health Commun. 2007;12(8):721–732.
- Impact of detailed informed consent on research subjects' participation: a prospective, randomized trial. J Emerg Med. 2008;34(3):269–275.
- A global measure of perceived stress. J Health Soc Behav. 1983;24:385–396. Available at: http://www.psy.cmu.edu/~scohen/globalmeas83.pdf. Accessed May 10, 2012.
- Pfizer, Inc. Clear Health Communication: The Newest Vital Sign. Available at: http://www.pfizerhealthliteracy.com/asset/pdf/NVS_Eng/files/nvs_flipbook_english_final.pdf. Accessed May 10, 2012.
- Relationship of preventive health practices and health literacy: a national study. Am J Health Behav. 2008;32(3):227–242.
- Predictors of medication self‐management skill in a low‐literacy population. J Gen Intern Med. 2006;21:852–856.
- Does how you do depend on how you think you'll do? A systematic review of the evidence for a relation between patients' recovery expectations and health outcomes [published correction appears in CMAJ. 2001;165(10):1303]. CMAJ. 2001;165(2):174–179.
- Health care costs in the last week of life: associations with end‐of‐life conversations. Arch Intern Med. 2009;169(5):480–488.
- Exploring health literacy competencies in community pharmacy. Health Expect. 2010;15(1):12–22.
- The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166(17):1822–1828.
- A prospective study of posttraumatic stress symptoms and non‐adherence in survivors of a myocardial infarction (MI). Gen Hosp Psychiatry. 2001;23:215–222.
- Posttraumatic stress, non‐adherence, and adverse outcomes in survivors of a myocardial infarction. Psychosom Med. 2004;66:521–526.
- The implications of health literacy on patient‐provider communication. Arch Dis Child. 2008;93:428–432.
- Strategies to improve recruitment to research studies. Cochrane Database Syst Rev. 2007;2:MR000013.
- Strategies to improve recruitment to randomised controlled trials. Cochrane Database Syst Rev. 2010;4:MR000013.
- Addressing diabetes racial and ethnic disparities: lessons learned from quality improvement collaboratives. Diabetes Manag (Lond). 2011;1(6):653–660.
- Racial differences in eligibility and enrollment in a smoking cessation clinical trial. Health Psychol. 2011;30(1):40–48.
- The disappearing subject: exclusion of people with cognitive impairment from research. J Am Geriatr Soc. 2012;6:413–419.
- The role of patient preferences in cost‐effectiveness analysis: a conflict of values? Pharmacoeconomics. 2009;27(9):705–712.
- A comprehensive care management program to prevent chronic obstructive pulmonary disease hospitalizations. Ann Intern Med. 2012;156(10):673–683.
- The role of expectations in patients' reports of post‐operative outcomes and improvement following therapy. Med Care. 1993;31:1043–1056.
- Expectations and outcomes after hip fracture among the elderly. Int J Aging Hum Dev. 1992;34:339–350.
- Role of patients' view of their illness in predicting return to work and functioning after myocardial infarction: longitudinal study. BMJ. 1996;312:1191–1194.
- Evaluating the 'all‐comers' design: a comparison of participants in two 'all‐comers' PCI trials with non‐participants. Eur Heart J. 2011;32(17):2161–2167. , , , et al.
- Can the randomized controlled trial literature generalize to nonrandomized patients? J Consult Clin Psychol. 2005;73(1):127–135. , , , .
- The care transitions intervention: translating from efficacy to effectiveness. Arch Intern Med. 2011;171(14):1232–1237. , , , , , .
- Determinants of patient participation in clinical studies requiring informed consent: why patients enter a clinical trial. Patient Educ Couns. 1998;35(2):111–125. , , .
- Coping self‐efficacy as a predictor of adherence to antiretroviral therapy in men and women living with HIV in Kenya. AIDS Patient Care STDS. 2011;25(9):557–561. , , , .
- Self‐reported influences of hopelessness, health literacy, lifestyle action, and patient inertia on blood pressure control in a hypertensive emergency department population. Am J Med Sci. 2009;338(5):368–372. , , , , , .
- Increasing recruitment to randomised trials: a review of randomised controlled trials. BMC Med Res Methodol. 2006;6:34. , .
- Age‐, sex‐, and race‐based differences among patients enrolled versus not enrolled in acute lung injury clinical trials. Crit Care Med. 2010;38(6):1450–1457. , , , , , .
- Patients' shame and attitudes toward discussing the results of literacy screening. J Health Commun. 2007;12(8):721–732. , , , , , .
- Impact of detailed informed consent on research subjects' participation: a prospective, randomized trial. J Emerg Med. 2008;34(3):269–275. .
- A global measure of perceived stress. J Health Soc Behav. 1983;24:385–396. Available at: http://www.psy.cmu.edu/∼scohen/globalmeas83.pdf. Accessed May 10, 2012. , , .
- .Pfizer, Inc. Clear Health Communication: The Newest Vital Sign. Available at: http://www.pfizerhealthliteracy.com/asset/pdf/NVS_Eng/files/nvs_flipbook_english_final.pdf. Accessed May 10, 2012.
- Relationship of preventive health practices and health literacy: a national study. Am J Health Behav. 2008;32(3):227–242. , , .
- Predictors of medication self‐management skill in a low‐literacy population. J Gen Intern Med. 2006;21:852–856. , , , , , .
- Does how you do depend on how you think you'll do? A systematic review of the evidence for a relation between patients' recovery expectations and health outcomes [published correction appears in CMAJ. 2001;165(10):1303]. CMAJ. 2001;165(2):174–179. , , .
- Health care costs in the last week of life: associations with end‐of‐life conversations. Arch Intern Med. 2009;169(5):480–488. , , , et al.
- Exploring health literacy competencies in community pharmacy. Health Expect. 2010;15(1):12–22. , , , , .
- The care transitions intervention: results of a randomized controlled trial. Arch Intern Med. 2006;166(17):1822–1828. , , , .
- A prospective study of posttraumatic stress symptoms and non‐adherence in survivors of a myocardial infarction (MI). Gen Hosp Psychiatry. 2001;23:215–222. , , , et al.
- Posttraumatic stress, non‐adherence, and adverse outcomes in survivors of a myocardial infarction. Psychosom Med. 2004;66:521–526. , , , et al.
- The implications of health literacy on patient‐provider communication. Arch Dis Child. 2008;93:428–432. , .
- Strategies to improve recruitment to research studies. Cochrane Database Syst Rev. 2007;2:MR000013. , , .
- Strategies to improve recruitment to randomised controlled trials. Cochrane Database Syst Rev. 2010;4:MR000013. , , , et al.
- Addressing diabetes racial and ethnic disparities: lessons learned from quality improvement collaboratives. Diabetes Manag (Lond). 2011;1(6):653–660. , , , .
- Racial differences in eligibility and enrollment in a smoking cessation clinical trial. Health Psychol. 2011;30(1):40–48. , , , .
- The disappearing subject: exclusion of people with cognitive impairment from research. J Am Geriatr Soc. 2012;6:413–419. , , , .
- The role of patient preferences in cost‐effectiveness analysis: a conflict of values? Pharmacoeconomics. 2009;27(9):705–712. , , .
- A comprehensive care management program to prevent chronic obstructive pulmonary disease hospitalizations. Ann Intern Med. 2012;156(10):673–683. , , , et al.
- The role of expectations in patients' reports of post‐operative outcomes and improvement following therapy. Med Care. 1993;31:1043–1056. , , , , .
- Expectations and outcomes after hip fracture among the elderly. Int J Aging Hum Dev. 1992;34:339–350. , .
- Role of patients' view of their illness in predicting return to work and functioning after myocardial infarction: longitudinal study. BMJ. 1996;312:1191–1194. , , , .
Copyright © 2013 Society of Hospital Medicine
Handoff CEX
Transfers among trainee physicians within the hospital typically occur at least twice a day and have been increasing among trainees as work hours have declined.[1] The 2011 Accreditation Council for Graduate Medical Education (ACGME) guidelines,[2] which restrict intern working hours to 16 hours from a previous maximum of 30, have likely increased the frequency of physician trainee handoffs even further. Similarly, transfers among hospitalist attendings occur at least twice a day, given typical shifts of 8 to 12 hours.
Given the frequency of transfers, and the potential for harm generated by failed transitions,[3, 4, 5, 6] the end‐of‐shift written and verbal handoffs have assumed increasingly greater importance in hospital care among both trainees and hospitalist attendings.
The ACGME now requires that programs assess the competency of trainees in handoff communication.[2] Yet, there are few tools for assessing the quality of sign‐out communication. Those that exist primarily focus on the written sign‐out, and are rarely validated.[7, 8, 9, 10, 11, 12] Furthermore, it is uncertain whether such assessments must be done by supervisors or whether peers can participate in the evaluation. In this prospective multi‐institutional study we assess the performance characteristics of a verbal sign‐out evaluation tool for internal medicine housestaff and hospitalist attendings, and examine whether it can be used by peers as well as by external evaluators. This tool has previously been found to effectively discriminate between experienced and inexperienced nurses conducting nursing handoffs.[13]
METHODS
Tool Design and Measures
The Handoff CEX (clinical evaluation exercise) is a structured assessment based on the format of the mini‐CEX, an instrument used to assess the quality of history and physical examination by trainees for which validation studies have previously been conducted.[14, 15, 16, 17] We developed the tool based on themes we identified from our own expertise,[1, 5, 6, 8, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] the ACGME core competencies for trainees,[2] and the literature to maximize content validity. First, standardization has numerous demonstrable benefits for safety in general and handoffs in particular.[30, 31, 32] Consequently we created a domain for organization in which standardization was a characteristic of high performance.
Second, there is evidence that people engaged in conversation routinely overestimate peer comprehension,[27] and that explicit strategies to combat this overestimation, such as confirming understanding, explicitly assigning tasks rather than using open‐ended language, and using concrete language, are effective.[33] Accordingly we created a domain for communication skills, which is also an ACGME competency.
Third, although there were no formal guidelines for sign‐out content when we developed this tool, our own research had demonstrated that the content elements most often missing and felt to be important by stakeholders were related to clinical condition and explicating thinking processes,[5, 6] so we created a domain for content that highlighted these areas and met the ACGME competency of medical knowledge. In accordance with standards for evaluation of learners, we incorporated a domain for judgment to identify where trainees were in the RIME spectrum of reporter, interpreter, manager, and educator.
Next, we added a section for professionalism in accordance with the ACGME core competencies of professionalism and patient care.[34] To avoid the disinclination of peers to label each other unprofessional, we labeled the professionalism domain as patient‐focused on the tool.
Finally, we included a domain for setting because of an extensive literature demonstrating increased handoff failures in noisy or interruptive settings.[35, 36, 37] We then revised the tool slightly based on our experiences among nurses and students.[13, 38] The final tool included the 6 domains described above and an assessment of overall competency. Each domain was scored on a 9‐point scale and included descriptive anchors at high and low ends of performance. We further divided the scale into 3 main sections: unsatisfactory (scores 1–3), satisfactory (4–6), and superior (7–9). We designed 2 tools, 1 to assess the person providing the handoff and 1 to assess the handoff recipient, each with its own descriptive anchors. The recipient tool did not include a content domain (see Supporting Information, Appendix 1, in the online version of this article).
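The three bands partition the 9-point scale evenly, so mapping a domain score to its band is a fixed lookup. A minimal sketch in Python (the function name is ours and purely illustrative; the published tool is a paper form, not software):

```python
def rating_band(score: int) -> str:
    """Map a 9-point Handoff CEX domain score to its descriptive band."""
    if not 1 <= score <= 9:
        raise ValueError("Handoff CEX domain scores run from 1 to 9")
    # 1-3 unsatisfactory, 4-6 satisfactory, 7-9 superior
    return ("unsatisfactory", "satisfactory", "superior")[(score - 1) // 3]
```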
Setting and Subjects
We tested the tool in 2 different urban academic medical centers: the University of Chicago Medicine (UCM) and Yale‐New Haven Hospital (Yale). At UCM, we tested the tool among hospitalists, nurse practitioners, and physician assistants during the Monday and Tuesday morning and Friday evening sign‐out sessions. At Yale, we tested the tool among housestaff during the evening sign‐out session from the primary team to the on‐call covering team.
The UCM is a 550‐bed urban academic medical center in which the nonteaching hospitalist service cares for patients with liver disease, or end‐stage renal or lung disease awaiting transplant, and a small fraction of general medicine and oncology patients when the housestaff service exceeds its cap. No formal training on sign‐out is provided to attending or midlevel providers. The nonteaching hospitalist service operates as a separate service from the housestaff service and consists of 38 hospitalist clinicians (hospitalist attendings, nurse practitioners, and physician assistants). There are 2 handoffs each day. In the morning the departing night hospitalist hands off to the incoming daytime hospitalist or midlevel provider. These handoffs occur at 7:30 am in a dedicated room. In the evening the daytime hospitalist or midlevel provider hands off to an incoming night hospitalist. This handoff occurs at 5:30 pm or 7:30 pm in a dedicated location. The written sign‐out is maintained on a Microsoft Word (Microsoft Corp., Redmond, WA) document on a password‐protected server and updated daily.
Yale is a 946‐bed urban academic medical center with a large internal medicine training program. Formal sign‐out education that covers the main domains of the tool is provided to new interns during the first 3 months of the year,[19] and a templated electronic medical record‐based electronic written handoff report is produced by the housestaff for all patients.[22] Approximately half of inpatient medicine patients are cared for by housestaff teams, which are entirely separate from the hospitalist service. Housestaff sign‐out occurs between 4 pm and 7 pm every night. At a minimum, the departing intern signs out to the incoming intern; this handoff is typically supervised by at least 1 second‐ or third‐year resident. All patients are signed out verbally; in addition, the written handoff report is provided to the incoming team. Most handoffs occur in a quiet charting room.
Data Collection
Data collection at UCM occurred between March and December 2010 on 3 days of each week: Mondays, Tuesdays, and Fridays. On Mondays and Tuesdays the morning handoffs were observed; on Fridays the evening handoffs were observed. Data collection at Yale occurred between March and May 2011. Only evening handoffs from the primary team to the overnight coverage were observed. At both sites, participants provided verbal informed consent prior to data collection. At the time of an eligible sign‐out session, a research assistant (D.R. at Yale, P.S. at UCM) provided the evaluation tools to all members of the incoming and outgoing teams, and observed the sign‐out session himself. Each person providing a handoff was asked to evaluate the recipient of the handoff; each person receiving a handoff was asked to evaluate the provider of the handoff. In addition, the trained third‐party observer (D.R., P.S.) evaluated both the provider and recipient of the handoff. The external evaluators were trained in principles of effective communication and the use of the tool, with specific review of anchors at each end of each domain. One evaluator had a DO degree and was completing an MPH degree. The second evaluator was an experienced clinical research assistant whose training consisted of supervised observation of 10 handoffs by a physician investigator. At Yale, if a resident was present, she or he was also asked to evaluate both the provider and recipient of the handoff. Consequently, every sign‐out session included at least 2 evaluations of each participant, 1 by a peer evaluator and 1 by a consistent external evaluator who did not know the patients. At Yale, many sign‐outs also included a third evaluation by a resident supervisor.
The study was approved by the institutional review boards at both UCM and Yale.
Statistical Analysis
We obtained mean, median, and interquartile range of scores for each subdomain of the tool as well as the overall assessment of handoff quality. We assessed convergent construct validity by assessing performance of the tool in different contexts. To do so, we determined whether scores differed by type of participant (provider or recipient), by site, by training level of evaluatee, or by type of evaluator (external, resident supervisor, or peer) by using Wilcoxon rank sum tests and Kruskal‐Wallis tests. For the assessment of differences in ratings by training level, we used evaluations of sign‐out providers only, because the 2 sites differed in scores for recipients. We also assessed construct validity by using Spearman rank correlation coefficients to describe the internal consistency of the tool in terms of the correlation between domains of the tool, and we conducted an exploratory factor analysis to gain insight into whether the subdomains of the tool were measuring the same construct. In conducting this analysis, we restricted the dataset to evaluations of sign‐out providers only, and used a principal components estimation method, a promax rotation, and squared multiple correlation communality priors. Finally, we conducted some preliminary studies of reliability by testing whether different types of evaluators provided similar assessments. We calculated a weighted kappa using Fleiss‐Cohen weights for external versus peer scores and again for supervising resident versus peer scores (Yale only). We were not able to assess test‐retest reliability by nature of the sign‐out process. Statistical significance was defined by a P value <0.05, and analyses were performed using SAS 9.2 (SAS Institute, Cary, NC).
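Fleiss–Cohen weights are quadratic: each disagreement is penalized in proportion to the squared distance between the two ratings, so a near-miss on the 9-point scale counts far less than a large discrepancy. A minimal sketch of the statistic in Python with NumPy (our own illustration of the method, not the SAS code used in the study):

```python
import numpy as np

def quadratic_weighted_kappa(rater_a, rater_b, k=9):
    """Weighted kappa for two raters scoring the same items on a 1..k scale,
    using Fleiss-Cohen (quadratic) disagreement weights."""
    a = np.asarray(rater_a) - 1                        # 0-based categories
    b = np.asarray(rater_b) - 1
    obs = np.zeros((k, k))
    np.add.at(obs, (a, b), 1)                          # joint rating counts
    obs /= obs.sum()                                   # observed proportions
    exp = np.outer(obs.sum(axis=1), obs.sum(axis=0))   # chance proportions
    idx = np.arange(k)
    # Fleiss-Cohen weight per cell: normalized squared rating distance
    disagree = (idx[:, None] - idx[None, :]) ** 2 / (k - 1) ** 2
    return 1.0 - (disagree * obs).sum() / (disagree * exp).sum()
```

For reference, `sklearn.metrics.cohen_kappa_score(a, b, weights="quadratic", labels=range(1, 10))` should produce the same value.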
RESULTS
A total of 149 handoff sessions were observed: 89 at UCM and 60 at Yale. Each site conducted a similar total number of evaluations: 336 at UCM, 337 at Yale. These sessions involved 97 unique individuals, 34 at UCM and 63 at Yale. Overall scores were high at both sites, but a wide range of scores was applied (Table 1).
Scores are reported on the 9-point scale as median (IQR), mean (SD), and range.

| Domain | Provider (N=343): Median (IQR) | Mean (SD) | Range | Recipient (N=330): Median (IQR) | Mean (SD) | Range | P Value |
|---|---|---|---|---|---|---|---|
| Setting | 7 (6–9) | 7.0 (1.7) | 2–9 | 7 (6–9) | 7.3 (1.6) | 2–9 | 0.05 |
| Organization | 7 (6–8) | 7.2 (1.5) | 2–9 | 8 (6–9) | 7.4 (1.4) | 2–9 | 0.07 |
| Communication | 7 (6–9) | 7.2 (1.6) | 1–9 | 8 (7–9) | 7.4 (1.5) | 2–9 | 0.22 |
| Content | 7 (6–8) | 7.0 (1.6) | 2–9 | N/A | N/A | N/A | N/A |
| Judgment | 8 (6–8) | 7.3 (1.4) | 3–9 | 8 (7–9) | 7.5 (1.4) | 3–9 | 0.06 |
| Professionalism | 8 (7–9) | 7.4 (1.5) | 2–9 | 8 (7–9) | 7.6 (1.4) | 3–9 | 0.23 |
| Overall | 7 (6–8) | 7.1 (1.5) | 2–9 | 7 (6–8) | 7.4 (1.4) | 2–9 | 0.02 |
Handoff Providers
A total of 343 evaluations of handoff providers were completed regarding 67 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff provider evaluation tool was professionalism (median: 8; interquartile range [IQR]: 7–9). The lowest rated domain was content (median: 7; IQR: 6–8) (Table 1).
Handoff Recipients
A total of 330 evaluations of handoff recipients were completed regarding 58 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff recipient evaluation tool was professionalism, with a median of 8 (IQR: 7–9). The lowest rated domain was setting, with a median score of 7 (IQR: 6–9) (Table 1).
Validity Testing
Comparing provider scores to recipient scores, recipients received significantly higher scores for overall assessment (Table 1). Scores at UCM and Yale were similar in all domains for providers but were slightly lower at UCM in several domains for recipients (see Supporting Information, Appendix 2, in the online version of this article). Scores did not differ significantly by training level (Table 2). Third‐party external evaluators consistently gave lower marks for the same handoff than peer evaluators did (Table 3).
Scores are median (range).

| Domain | NP/PA, N=33 | Subintern or Intern, N=170 | Resident, N=44 | Hospitalist, N=95 | P Value |
|---|---|---|---|---|---|
| Setting | 7 (2–9) | 7 (3–9) | 7 (4–9) | 7 (2–9) | 0.89 |
| Organization | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (3–9) | 0.11 |
| Communication | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (1–9) | 0.72 |
| Content | 7 (3–9) | 7 (2–9) | 7 (4–9) | 7 (2–9) | 0.92 |
| Judgment | 8 (5–9) | 7 (3–9) | 8 (4–9) | 8 (4–9) | 0.09 |
| Professionalism | 8 (4–9) | 7 (2–9) | 8 (3–9) | 8 (4–9) | 0.82 |
| Overall | 7 (3–9) | 7 (2–9) | 8 (4–9) | 7 (2–9) | 0.28 |
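The comparisons behind these tables are rank-based: Kruskal–Wallis across more than two groups, Wilcoxon rank sum for two. A minimal sketch with SciPy on made-up 9-point scores (the data and group sizes here are invented for illustration; the study's analysis was performed in SAS):

```python
from scipy.stats import kruskal, ranksums

# Hypothetical overall handoff scores (1-9 scale) by training level
intern = [7, 6, 8, 7, 5, 9, 7, 8]
resident = [8, 7, 8, 9, 7, 6, 8, 7]
hospitalist = [7, 8, 6, 7, 9, 8, 7, 6]

# Kruskal-Wallis: do score distributions differ across all three groups?
h_stat, p_kw = kruskal(intern, resident, hospitalist)

# Wilcoxon rank sum: a two-group comparison, e.g., one level vs. another
z_stat, p_rs = ranksums(intern, hospitalist)
```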
Scores are median (range).

| Domain | Provider: Peer, N=152 | Provider: Resident Supervisor, N=43 | Provider: External, N=147 | P Value | Recipient: Peer, N=145 | Recipient: Resident Supervisor, N=43 | Recipient: External, N=142 | P Value |
|---|---|---|---|---|---|---|---|---|
| Setting | 8 (3–9) | 7 (3–9) | 7 (2–9) | 0.02 | 8 (2–9) | 7 (3–9) | 7 (2–9) | <0.001 |
| Organization | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.18 | 8 (3–9) | 8 (6–9) | 7 (2–9) | <0.001 |
| Communication | 8 (3–9) | 8 (3–9) | 7 (1–9) | <0.001 | 8 (3–9) | 8 (4–9) | 7 (2–9) | <0.001 |
| Content | 8 (3–9) | 8 (2–9) | 7 (2–9) | <0.001 | N/A | N/A | N/A | N/A |
| Judgment | 8 (4–9) | 8 (3–9) | 7 (3–9) | <0.001 | 8 (3–9) | 8 (4–9) | 7 (3–9) | <0.001 |
| Professionalism | 8 (3–9) | 8 (5–9) | 7 (2–9) | 0.02 | 8 (3–9) | 8 (6–9) | 7 (3–9) | <0.001 |
| Overall | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.001 | 8 (2–9) | 8 (4–9) | 7 (2–9) | <0.001 |
Spearman rank correlation coefficients among the CEX subdomains for provider scores ranged from 0.71 to 0.86, except for setting (Table 4). Setting was less well correlated with the other subdomains, with correlation coefficients ranging from 0.39 to 0.41. Correlations between individual domains and the overall rating ranged from 0.80 to 0.86, except setting, which had a correlation of 0.55. Every correlation was significant at P<0.001. Correlation coefficients for recipient scores were very similar to those for provider scores (see Supporting Information, Appendix 3, in the online version of this article).
Spearman correlation coefficients.

| Domain | Setting | Organization | Communication | Content | Judgment | Professionalism |
|---|---|---|---|---|---|---|
| Setting | 1.00 | 0.40 | 0.40 | 0.39 | 0.39 | 0.41 |
| Organization | 0.40 | 1.00 | 0.80 | 0.71 | 0.77 | 0.73 |
| Communication | 0.40 | 0.80 | 1.00 | 0.79 | 0.82 | 0.77 |
| Content | 0.39 | 0.71 | 0.79 | 1.00 | 0.80 | 0.74 |
| Judgment | 0.39 | 0.77 | 0.82 | 0.80 | 1.00 | 0.78 |
| Professionalism | 0.41 | 0.73 | 0.77 | 0.74 | 0.78 | 1.00 |
| Overall | 0.55 | 0.80 | 0.84 | 0.83 | 0.86 | 0.82 |
We analyzed 343 provider evaluations in the factor analysis; there were 6 missing values. The scree plot of eigenvalues did not support more than 1 factor; however, the rotated factor pattern for standardized regression coefficients for the first factor and the final communality estimates showed the setting component yielding smaller values than did other scale components (see Supporting Information, Appendix 4, in the online version of this article).
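The single-factor reading of the scree plot can be checked directly against the correlation matrix reported in Table 4: only the first eigenvalue exceeds the conventional Kaiser cutoff of 1. A sketch in Python (eigenvalues of the published Spearman matrix; the study's factor analysis ran on the raw evaluations, so this is an approximation):

```python
import numpy as np

# Spearman correlations among provider subdomains (Table 4), in order:
# setting, organization, communication, content, judgment, professionalism
R = np.array([
    [1.00, 0.40, 0.40, 0.39, 0.39, 0.41],
    [0.40, 1.00, 0.80, 0.71, 0.77, 0.73],
    [0.40, 0.80, 1.00, 0.79, 0.82, 0.77],
    [0.39, 0.71, 0.79, 1.00, 0.80, 0.74],
    [0.39, 0.77, 0.82, 0.80, 1.00, 0.78],
    [0.41, 0.73, 0.77, 0.74, 0.78, 1.00],
])

eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]   # descending order
# Kaiser criterion: retain factors whose eigenvalue exceeds 1
n_factors = int((eigenvalues > 1.0).sum())
```

The dominant first eigenvalue reflects the high correlations among five of the six subdomains, while setting's weak coupling surfaces only as a distant second component.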
Reliability Testing
Weighted kappa scores for provider evaluations ranged from 0.28 (95% confidence interval [CI]: 0.01, 0.56) for setting to 0.59 (95% CI: 0.38, 0.80) for organization, and were generally higher for resident versus peer comparisons than for external versus peer comparisons. Weighted kappa scores for recipient evaluation were slightly lower for external versus peer evaluations, but agreement was no better than chance for resident versus peer evaluations (Table 5).
Values are weighted kappa (95% CI).

| Domain | Provider: External vs Peer, N=144 | Provider: Resident vs Peer, N=42 | Recipient: External vs Peer, N=134 | Recipient: Resident vs Peer, N=43 |
|---|---|---|---|---|
| Setting | 0.39 (0.24, 0.54) | 0.28 (0.01, 0.56) | 0.34 (0.20, 0.48) | 0.48 (0.27, 0.69) |
| Organization | 0.43 (0.29, 0.58) | 0.59 (0.39, 0.80) | 0.39 (0.22, 0.55) | 0.03 (-0.23, 0.29) |
| Communication | 0.34 (0.19, 0.49) | 0.52 (0.37, 0.68) | 0.36 (0.22, 0.51) | 0.02 (-0.18, 0.23) |
| Content | 0.38 (0.25, 0.51) | 0.53 (0.27, 0.80) | N/A | N/A |
| Judgment | 0.36 (0.22, 0.49) | 0.54 (0.25, 0.83) | 0.28 (0.15, 0.42) | -0.12 (-0.34, 0.09) |
| Professionalism | 0.47 (0.32, 0.63) | 0.47 (0.23, 0.72) | 0.35 (0.18, 0.51) | -0.01 (-0.29, 0.26) |
| Overall | 0.50 (0.36, 0.64) | 0.45 (0.24, 0.67) | 0.31 (0.16, 0.48) | 0.07 (-0.20, 0.34) |
DISCUSSION
In this study we found that an evaluation tool for direct observation of housestaff and hospitalists generated a range of scores and was well validated in the sense of performing similarly across 2 different institutions and among both trainees and attendings, while having high internal consistency. However, external evaluators gave consistently lower marks than peer evaluators at both sites, resulting in low reliability when comparing these 2 groups of raters.
It has traditionally been difficult to conduct direct evaluations of handoffs, because they may occur at haphazard times, in variable locations, and without very much advance notice. For this reason, several attempts have been made to incorporate peers in evaluations of handoff practices.[5, 39, 40] Using peers to conduct evaluations also has the advantage that peers are more likely to be familiar with the patients being handed off and might recognize handoff flaws that external evaluators would miss. Nonetheless, peer evaluations have some important liabilities. Peers may be unwilling or unable to provide honest critiques of their colleagues given that they must work closely together for years. Trainee peers may also lack sufficient clinical expertise or experience to accurately assess competence. In our study, we found that peers gave consistently higher marks to their colleagues than did external evaluators, suggesting they may have found it difficult to criticize their colleagues. We conclude that peer evaluation alone is likely an insufficient means of evaluating handoff quality.
Supervising residents gave marks very similar to those of intern peers, suggesting that they also are unwilling to criticize, are insufficiently experienced to evaluate, or, alternatively, that the peer evaluations were reasonable. We suspect the latter is unlikely, given that external evaluator scores were consistently lower than peers'. One would expect the external evaluators to be biased toward higher scores, given that they are not familiar with the patients and are not able to comment on inaccuracies or omissions in the sign‐out.
The tool appeared to perform less well in most cases for recipients than for providers, with a narrower range of scores and low‐weighted kappa scores. Although recipients play a key role in ensuring a high‐quality sign‐out by paying close attention, ensuring it is a bidirectional conversation, asking appropriate questions, and reading back key information, it may be that evaluators were unable to place these activities within the same domains that were used for the provider evaluation. An altogether different recipient evaluation approach may be necessary.[41]
In general, scores were clustered at the top of the score range, as is typical for evaluations. One strategy to spread out scores further would be to refine the tool by adding anchors for satisfactory performance, not just at the extremes. A second approach might be to reduce the grading scale to only 3 points (unsatisfactory, satisfactory, superior) to force more scores to the middle. However, this approach might limit the discrimination ability of the tool.
We have previously studied the use of this tool among nurses. In that study, we also found consistently higher scores by peers than by external evaluators. We did, however, find a positive effect of experience, in which more experienced nurses received higher scores on average. We did not observe a similar training effect in this study. There are several possible explanations for the lack of a training effect. It is possible that the types of handoffs assessed played a role. At UCM, some assessed handoffs were night staff to day staff, which might be of lower quality than day staff to night staff handoffs, whereas at Yale, all handoffs were day to night teams. Thus, average scores at UCM (primarily hospitalists) might have been lowered by the type of handoff provided. Given that hospitalist evaluations were conducted exclusively at UCM and housestaff evaluations exclusively at Yale, the lack of difference between hospitalists and housestaff may also have been related to differences in evaluation practice or handoff practice at the 2 sites, not necessarily to training level. Third, in our experience, attending physicians provide briefer, less comprehensive sign‐outs than trainees, particularly when communicating with equally experienced attendings; these sign‐outs may appropriately be scored lower on the tool. Fourth, the great majority of the hospitalists at UCM were within 5 years of residency and therefore not very much more experienced than the trainees. Finally, it is possible that skills do not improve over time given widespread lack of observation and feedback during training years for this important skill.
The high internal consistency of most of the subdomains and the loading of all subdomains except setting onto 1 factor are evidence of convergent construct validity, but also suggest that evaluators have difficulty distinguishing among components of sign‐out quality. Internal consistency may also reflect a halo effect, in which scores on different domains are all influenced by a common overall judgment.[42] We are currently testing a shorter version of the tool including domains only for content, professionalism, and setting in addition to overall score. The fact that setting did not correlate as well with the other domains suggests that sign‐out practitioners may not have or exercise control over their surroundings. Consequently, it may ultimately be reasonable to drop this domain from the tool, or alternatively, to refocus on the need to ensure a quiet setting during sign‐out skills training.
There are several limitations to this study. External evaluations were conducted by personnel who were not familiar with the patients, and they may therefore have overestimated the quality of sign‐out. Studying different types of physicians at different sites might have limited our ability to identify differences by training level. As is commonly seen in evaluation studies, scores were skewed to the high end, although we did observe some use of the full range of the tool. Finally, we were limited in our ability to test inter‐rater reliability because of the multiple sources of variability in the data (numerous different raters, with different backgrounds at different settings, rating different individuals).
In summary, we developed a handoff evaluation tool that was easily completed by housestaff and attendings without training, that performed similarly in a variety of different settings at 2 institutions, and that can in principle be used either for peer evaluations or for external evaluations, although peer evaluations may be positively biased. Further work will be done to refine and simplify the tool.
ACKNOWLEDGMENTS
Disclosures: Development and evaluation of the sign‐out CEX was supported by a grant from the Agency for Healthcare Research and Quality (1R03HS018278‐01). Dr. Arora is supported by a National Institute on Aging (K23 AG033763). Dr. Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30AG021342 NIH/NIA). No funding source had any role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality, the National Institute on Aging, the National Institutes of Health, or the American Federation for Aging Research. Dr. Horwitz had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. An earlier version of this work was presented as a poster presentation at the Society of General Internal Medicine Annual Meeting in Orlando, Florida on May 9, 2012. Dr. Rand is now with the Department of Medicine, University of Vermont College of Medicine, Burlington, Vermont. Mr. Staisiunas is now with the Law School, Marquette University, Milwaukee, Wisconsin. The authors declare they have no conflicts of interest.
Appendix A
PROVIDER HAND‐OFF CEX TOOL
RECIPIENT HAND‐OFF CEX TOOL
Appendix B
Handoff CEX scores by site of evaluation
Scores are median (range).

| Domain | Provider: UC, N=172 | Provider: Yale, N=170 | P Value | Recipient: UC, N=163 | Recipient: Yale, N=167 | P Value |
|---|---|---|---|---|---|---|
| Setting | 7 (2–9) | 7 (3–9) | 0.32 | 7 (2–9) | 7 (3–9) | 0.36 |
| Organization | 8 (2–9) | 7 (3–9) | 0.30 | 7 (2–9) | 8 (5–9) | 0.001 |
| Communication | 7 (1–9) | 7 (3–9) | 0.67 | 7 (2–9) | 8 (4–9) | 0.03 |
| Content | 7 (2–9) | 7 (2–9) | N/A | N/A | N/A | N/A |
| Judgment | 8 (3–9) | 7 (3–9) | 0.60 | 7 (3–9) | 8 (4–9) | 0.001 |
| Professionalism | 8 (2–9) | 8 (3–9) | 0.67 | 8 (3–9) | 8 (4–9) | 0.35 |
| Overall | 7 (2–9) | 7 (3–9) | 0.41 | 7 (2–9) | 8 (4–9) | 0.005 |
Appendix C
Spearman correlation, recipients (N=330)
Spearman Correlation Coefficients

| | Setting | Organization | Communication | Judgment | Professionalism |
|---|---|---|---|---|---|
| Setting | 1.00 | 0.46 | 0.48 | 0.47 | 0.40 |
| Organization | 0.46 | 1.00 | 0.78 | 0.75 | 0.75 |
| Communication | 0.48 | 0.78 | 1.00 | 0.85 | 0.77 |
| Judgment | 0.47 | 0.75 | 0.85 | 1.00 | 0.74 |
| Professionalism | 0.40 | 0.75 | 0.77 | 0.74 | 1.00 |
| Overall | 0.60 | 0.77 | 0.84 | 0.82 | 0.77 |

All P values <0.0001.
Appendix D
Factor analysis results for provider evaluations
Rotated factor pattern (standardized regression coefficients), N=336

| | Factor 1 | Factor 2 |
|---|---|---|
| Organization | 0.64 | 0.27 |
| Communication | 0.79 | 0.16 |
| Content | 0.82 | 0.06 |
| Judgment | 0.86 | 0.06 |
| Professionalism | 0.66 | 0.23 |
| Setting | 0.18 | 0.29 |
1. Transfers of patient care between house staff on internal medicine wards: a national survey. Arch Intern Med. 2006;166(11):1173–1177.
2. Accreditation Council for Graduate Medical Education. Common program requirements. 2011. http://www.acgme‐2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed August 23, 2011.
3. Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866–872.
4. Communication failures: an insidious contributor to medical mishaps. Acad Med. 2004;79(2):186–194.
5. Communication failures in patient sign‐out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401–407.
6. Consequences of inadequate sign‐out for patient care. Arch Intern Med. 2008;168(16):1755–1760.
7. Adequacy of information transferred at resident sign‐out (in‐hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17(1):6–10.
8. What are covering doctors told about their patients? Analysis of sign‐out among internal medicine house staff. Qual Saf Health Care. 2009;18(4):248–255.
9. Using direct observation, formal evaluation, and an interactive curriculum to improve the sign‐out practices of internal medicine interns. Acad Med. 2010;85(7):1182–1188.
10. Doctors' handovers in hospitals: a literature review. Qual Saf Health Care. 2011;20(2):128–133.
11. Resident sign‐out and patient hand‐offs: opportunities for improvement. Teach Learn Med. 2011;23(2):105–111.
12. Use of an appreciative inquiry approach to improve resident sign‐out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287–291.
13. Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi:10.1111/j.1365-2702.2012.04131.x.
14. The mini‐CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med. 1995;123(10):795–799.
15. Examiner differences in the mini‐CEX. Adv Health Sci Educ Theory Pract. 1997;2(1):27–33.
16. Assessing the reliability and validity of the mini‐clinical evaluation exercise for internal medicine residency training. Acad Med. 2002;77(9):900–904.
17. Construct validity of the miniclinical evaluation exercise (miniCEX). Acad Med. 2003;78(8):826–830.
18. Dropping the baton: a qualitative analysis of failures during the transition from emergency department to inpatient care. Ann Emerg Med. 2009;53(6):701–710.e4.
19. Development and implementation of an oral sign‐out skills curriculum. J Gen Intern Med. 2007;22(10):1470–1474.
20. Mixed methods evaluation of oral sign‐out practices. J Gen Intern Med. 2007;22(S1):S114.
21. Evaluation of an asynchronous physician voicemail sign‐out for emergency department admissions. Ann Emerg Med. 2009;54(3):368–378.
22. An institution‐wide handoff task force to standardise and improve physician handoffs. BMJ Qual Saf. 2012;21(10):863–871.
23. A model for building a standardized hand‐off protocol. Jt Comm J Qual Patient Saf. 2006;32(11):646–655.
24. Medication discrepancies in resident sign‐outs and their potential to harm. J Gen Intern Med. 2007;22(12):1751–1755.
25. A theoretical framework and competency‐based approach to improving handoffs. Qual Saf Health Care. 2008;17(1):11–14.
26. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433–440.
27. Interns overestimate the effectiveness of their hand‐off communication. Pediatrics. 2010;125(3):491–496.
28. Improving clinical handovers: creating local solutions for a global problem. Qual Saf Health Care. 2009;18(4):244–245.
29. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out. J Hosp Med. 2006;1(4):257–266.
30. Standardized sign‐out reduces intern perception of medical errors on the general internal medicine ward. Teach Learn Med. 2009;21(2):121–126.
31. SBAR: a shared mental model for improving communication between clinicians. Jt Comm J Qual Patient Saf. 2006;32(3):167–175.
32. Structuring flexibility: the potential good, bad and ugly in standardisation of handovers. Qual Saf Health Care. 2008;17(1):4–5.
33. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125–132.
34. Residents' perceptions of professionalism in training and practice: barriers, promoters, and duty hour requirements. J Gen Intern Med. 2006;21(7):758–763.
35. Communication behaviours in a hospital setting: an observational study. BMJ. 1998;316(7132):673–676.
36. Communication loads on clinical staff in the emergency department. Med J Aust. 2002;176(9):415–418.
37. A systematic review of failures in handoff communication during intrahospital transfers. Jt Comm J Qual Patient Saf. 2011;37(6):274–284.
38. Hand‐off education and evaluation: piloting the observed simulated hand‐off experience (OSHE). J Gen Intern Med. 2010;25(2):129–134.
39. Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34(10):563–570.
40. A prospective observational study of physician handoff for intensive‐care‐unit‐to‐ward patient transfers. Am J Med. 2011;124(9):860–867.
41. Characterizing physician listening behavior during hospitalist handoffs using the HEAR checklist [published online ahead of print December 20, 2012]. BMJ Qual Saf. doi:10.1136/bmjqs-2012-001138.
42. A constant error in psychological ratings. J Appl Psychol. 1920;4(1):25.
Transfers among trainee physicians within the hospital typically occur at least twice a day and have been increasing among trainees as work hours have declined.[1] The 2011 Accreditation Council for Graduate Medical Education (ACGME) guidelines,[2] which restrict intern working hours to 16 hours from a previous maximum of 30, have likely increased the frequency of physician trainee handoffs even further. Similarly, transfers among hospitalist attendings occur at least twice a day, given typical shifts of 8 to 12 hours.
Given the frequency of transfers, and the potential for harm generated by failed transitions,[3, 4, 5, 6] the end‐of‐shift written and verbal handoffs have assumed increasingly greater importance in hospital care among both trainees and hospitalist attendings.
The ACGME now requires that programs assess the competency of trainees in handoff communication.[2] Yet, there are few tools for assessing the quality of sign‐out communication. Those that exist primarily focus on the written sign‐out, and are rarely validated.[7, 8, 9, 10, 11, 12] Furthermore, it is uncertain whether such assessments must be done by supervisors or whether peers can participate in the evaluation. In this prospective multi‐institutional study we assess the performance characteristics of a verbal sign‐out evaluation tool for internal medicine housestaff and hospitalist attendings, and examine whether it can be used by peers as well as by external evaluators. This tool has previously been found to effectively discriminate between experienced and inexperienced nurses conducting nursing handoffs.[13]
METHODS
Tool Design and Measures
The Handoff CEX (clinical evaluation exercise) is a structured assessment based on the format of the mini‐CEX, an instrument used to assess the quality of history and physical examination by trainees for which validation studies have previously been conducted.[14, 15, 16, 17] We developed the tool based on themes we identified from our own expertise,[1, 5, 6, 8, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] the ACGME core competencies for trainees,[2] and the literature to maximize content validity. First, standardization has numerous demonstrable benefits for safety in general and handoffs in particular.[30, 31, 32] Consequently we created a domain for organization in which standardization was a characteristic of high performance.
Second, there is evidence that people engaged in conversation routinely overestimate peer comprehension,[27] and that explicit strategies to combat this overestimation, such as confirming understanding, explicitly assigning tasks rather than using open‐ended language, and using concrete language, are effective.[33] Accordingly we created a domain for communication skills, which is also an ACGME competency.
Third, although there were no formal guidelines for sign‐out content when we developed this tool, our own research had demonstrated that the content elements most often missing and felt to be important by stakeholders were related to clinical condition and explicating thinking processes,[5, 6] so we created a domain for content that highlighted these areas and met the ACGME competency of medical knowledge. In accordance with standards for evaluation of learners, we incorporated a domain for judgment to identify where trainees were in the RIME spectrum of reporter, interpreter, master, and educator.
Next, we added a section for professionalism in accordance with the ACGME core competencies of professionalism and patient care.[34] To avoid the disinclination of peers to label each other unprofessional, we labeled the professionalism domain as patient‐focused on the tool.
Finally, we included a domain for setting because of an extensive literature demonstrating increased handoff failures in noisy or interruptive settings.[35, 36, 37] We then revised the tool slightly based on our experiences among nurses and students.[13, 38] The final tool included the 6 domains described above and an assessment of overall competency. Each domain was scored on a 9‐point scale and included descriptive anchors at high and low ends of performance. We further divided the scale into 3 main sections: unsatisfactory (scores 1–3), satisfactory (4–6), and superior (7–9). We designed 2 tools, 1 to assess the person providing the handoff and 1 to assess the handoff recipient, each with its own descriptive anchors. The recipient tool did not include a content domain (see Supporting Information, Appendix 1, in the online version of this article).
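The banding of the 9‐point scale described above can be expressed as a small sketch; the helper name is hypothetical and not part of the published tool:

```python
def rating_band(score: int) -> str:
    """Map a 9-point Handoff CEX score to its descriptive band:
    1-3 unsatisfactory, 4-6 satisfactory, 7-9 superior."""
    if not 1 <= score <= 9:
        raise ValueError("Handoff CEX scores range from 1 to 9")
    # Integer division groups the scale into three consecutive bands of 3.
    return ("unsatisfactory", "satisfactory", "superior")[(score - 1) // 3]
```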
Setting and Subjects
We tested the tool in 2 different urban academic medical centers: the University of Chicago Medicine (UCM) and Yale‐New Haven Hospital (Yale). At UCM, we tested the tool among hospitalists, nurse practitioners, and physician assistants during the Monday and Tuesday morning and Friday evening sign‐out sessions. At Yale, we tested the tool among housestaff during the evening sign‐out session from the primary team to the on‐call covering team.
The UCM is a 550‐bed urban academic medical center in which the nonteaching hospitalist service cares for patients with liver disease or with end‐stage renal or lung disease awaiting transplant, and a small fraction of general medicine and oncology patients when the housestaff service exceeds its cap. No formal training on sign‐out is provided to attending or midlevel providers. The nonteaching hospitalist service operates as a separate service from the housestaff service and consists of 38 hospitalist clinicians (hospitalist attendings, nurse practitioners, and physician assistants). There are 2 handoffs each day. In the morning the departing night hospitalist hands off to the incoming daytime hospitalist or midlevel provider. These handoffs occur at 7:30 am in a dedicated room. In the evening the daytime hospitalist or midlevel provider hands off to an incoming night hospitalist. This handoff occurs at 5:30 pm or 7:30 pm in a dedicated location. The written sign‐out is maintained in a Microsoft Word (Microsoft Corp., Redmond, WA) document on a password‐protected server and updated daily.
Yale is a 946‐bed urban academic medical center with a large internal medicine training program. Formal sign‐out education that covers the main domains of the tool is provided to new interns during the first 3 months of the year,[19] and a templated electronic medical record‐based electronic written handoff report is produced by the housestaff for all patients.[22] Approximately half of inpatient medicine patients are cared for by housestaff teams, which are entirely separate from the hospitalist service. Housestaff sign‐out occurs between 4 pm and 7 pm every night. At a minimum, the departing intern signs out to the incoming intern; this handoff is typically supervised by at least 1 second‐ or third‐year resident. All patients are signed out verbally; in addition, the written handoff report is provided to the incoming team. Most handoffs occur in a quiet charting room.
Data Collection
Data collection at UCM occurred between March and December 2010 on 3 days of each week: Mondays, Tuesdays, and Fridays. On Mondays and Tuesdays the morning handoffs were observed; on Fridays the evening handoffs were observed. Data collection at Yale occurred between March and May 2011. Only evening handoffs from the primary team to the overnight coverage were observed. At both sites, participants provided verbal informed consent prior to data collection. At the time of an eligible sign‐out session, a research assistant (D.R. at Yale, P.S. at UCM) provided the evaluation tools to all members of the incoming and outgoing teams, and observed the sign‐out session himself. Each person providing a handoff was asked to evaluate the recipient of the handoff; each person receiving a handoff was asked to evaluate the provider of the handoff. In addition, the trained third‐party observer (D.R., P.S.) evaluated both the provider and recipient of the handoff. The external evaluators were trained in principles of effective communication and the use of the tool, with specific review of anchors at each end of each domain. One evaluator had a DO degree and was completing an MPH degree. The second evaluator was an experienced clinical research assistant whose training consisted of supervised observation of 10 handoffs by a physician investigator. At Yale, if a resident was present, she or he was also asked to evaluate both the provider and recipient of the handoff. Consequently, every sign‐out session included at least 2 evaluations of each participant, 1 by a peer evaluator and 1 by a consistent external evaluator who did not know the patients. At Yale, many sign‐outs also included a third evaluation by a resident supervisor.
The study was approved by the institutional review boards at both UCM and Yale.
Statistical Analysis
We obtained mean, median, and interquartile range of scores for each subdomain of the tool as well as the overall assessment of handoff quality. We assessed convergent construct validity by assessing performance of the tool in different contexts. To do so, we determined whether scores differed by type of participant (provider or recipient), by site, by training level of evaluatee, or by type of evaluator (external, resident supervisor, or peer) by using Wilcoxon rank sum tests and Kruskal‐Wallis tests. For the assessment of differences in ratings by training level, we used evaluations of sign‐out providers only, because the 2 sites differed in scores for recipients. We also assessed construct validity by using Spearman rank correlation coefficients to describe the internal consistency of the tool in terms of the correlation between domains of the tool, and we conducted an exploratory factor analysis to gain insight into whether the subdomains of the tool were measuring the same construct. In conducting this analysis, we restricted the dataset to evaluations of sign‐out providers only, and used a principal components estimation method, a promax rotation, and squared multiple correlation communality priors. Finally, we conducted some preliminary studies of reliability by testing whether different types of evaluators provided similar assessments. We calculated a weighted kappa using Fleiss‐Cohen weights for external versus peer scores and again for supervising resident versus peer scores (Yale only). We were not able to assess test‐retest reliability given the nature of the sign‐out process. Statistical significance was defined by a P value <0.05, and analyses were performed using SAS 9.2 (SAS Institute, Cary, NC).
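The Fleiss‐Cohen weighting used for the kappa statistic is equivalent to quadratic weights: disagreements are penalized by the squared distance between the 2 ratings. As an illustration only (the study's analyses were run in SAS, and this sketch is not the authors' code), a minimal reimplementation for two raters on a 1‐to‐k ordinal scale:

```python
from collections import Counter

def weighted_kappa(rater1, rater2, k=9):
    """Weighted kappa with Fleiss-Cohen (quadratic) weights for two raters
    scoring the same items on a 1..k ordinal scale."""
    n = len(rater1)
    # Agreement weight shrinks quadratically with the distance between ratings.
    w = lambda i, j: 1.0 - ((i - j) ** 2) / ((k - 1) ** 2)
    # Observed weighted agreement across the rated items.
    po = sum(w(a, b) for a, b in zip(rater1, rater2)) / n
    # Expected weighted agreement from each rater's marginal frequencies.
    p, q = Counter(rater1), Counter(rater2)
    pe = sum(w(i, j) * p[i] * q[j]
             for i in range(1, k + 1) for j in range(1, k + 1)) / n ** 2
    return (po - pe) / (1 - pe)
```

Perfect agreement yields 1, chance‐level agreement yields approximately 0, and systematic disagreement is negative, which is the scale on which the reliability results below are read.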
RESULTS
A total of 149 handoff sessions were observed: 89 at UCM and 60 at Yale. Each site conducted a similar total number of evaluations: 336 at UCM, 337 at Yale. These sessions involved 97 unique individuals, 34 at UCM and 63 at Yale. Overall scores were high at both sites, but a wide range of scores was applied (Table 1).
Table 1. Handoff CEX scores

| Domain | Provider (N=343), Median (IQR) | Mean (SD) | Range | Recipient (N=330), Median (IQR) | Mean (SD) | Range | P value |
|---|---|---|---|---|---|---|---|
| Setting | 7 (6–9) | 7.0 (1.7) | 2–9 | 7 (6–9) | 7.3 (1.6) | 2–9 | 0.05 |
| Organization | 7 (6–8) | 7.2 (1.5) | 2–9 | 8 (6–9) | 7.4 (1.4) | 2–9 | 0.07 |
| Communication | 7 (6–9) | 7.2 (1.6) | 1–9 | 8 (7–9) | 7.4 (1.5) | 2–9 | 0.22 |
| Content | 7 (6–8) | 7.0 (1.6) | 2–9 | | | | |
| Judgment | 8 (6–8) | 7.3 (1.4) | 3–9 | 8 (7–9) | 7.5 (1.4) | 3–9 | 0.06 |
| Professionalism | 8 (7–9) | 7.4 (1.5) | 2–9 | 8 (7–9) | 7.6 (1.4) | 3–9 | 0.23 |
| Overall | 7 (6–8) | 7.1 (1.5) | 2–9 | 7 (6–8) | 7.4 (1.4) | 2–9 | 0.02 |
Handoff Providers
A total of 343 evaluations of handoff providers were completed regarding 67 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff provider evaluation tool was professionalism (median: 8; interquartile range [IQR]: 7–9). The lowest rated domain was content (median: 7; IQR: 6–8) (Table 1).
Handoff Recipients
A total of 330 evaluations of handoff recipients were completed regarding 58 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff recipient evaluation tool was professionalism, with a median of 8 (IQR: 7–9). The lowest rated domain was setting, with a median score of 7 (IQR: 6–9) (Table 1).
Validity Testing
Comparing provider scores to recipient scores, recipients received significantly higher scores for overall assessment (Table 1). Scores at UCM and Yale were similar in all domains for providers but were slightly lower at UCM in several domains for recipients (see Supporting Information, Appendix 2, in the online version of this article). Scores did not differ significantly by training level (Table 2). Third‐party external evaluators consistently gave lower marks for the same handoff than peer evaluators did (Table 3).
Table 2. Provider handoff CEX scores by training level, median (range)

| Domain | NP/PA (N=33) | Subintern or Intern (N=170) | Resident (N=44) | Hospitalist (N=95) | P value |
|---|---|---|---|---|---|
| Setting | 7 (2–9) | 7 (3–9) | 7 (4–9) | 7 (2–9) | 0.89 |
| Organization | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (3–9) | 0.11 |
| Communication | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (1–9) | 0.72 |
| Content | 7 (3–9) | 7 (2–9) | 7 (4–9) | 7 (2–9) | 0.92 |
| Judgment | 8 (5–9) | 7 (3–9) | 8 (4–9) | 8 (4–9) | 0.09 |
| Professionalism | 8 (4–9) | 7 (2–9) | 8 (3–9) | 8 (4–9) | 0.82 |
| Overall | 7 (3–9) | 7 (2–9) | 8 (4–9) | 7 (2–9) | 0.28 |
Table 3. Handoff CEX scores by evaluator type, median (range)

| Domain | Provider: Peer (N=152) | Provider: Resident Supervisor (N=43) | Provider: External (N=147) | P value | Recipient: Peer (N=145) | Recipient: Resident Supervisor (N=43) | Recipient: External (N=142) | P value |
|---|---|---|---|---|---|---|---|---|
| Setting | 8 (3–9) | 7 (3–9) | 7 (2–9) | 0.02 | 8 (2–9) | 7 (3–9) | 7 (2–9) | <0.001 |
| Organization | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.18 | 8 (3–9) | 8 (6–9) | 7 (2–9) | <0.001 |
| Communication | 8 (3–9) | 8 (3–9) | 7 (1–9) | <0.001 | 8 (3–9) | 8 (4–9) | 7 (2–9) | <0.001 |
| Content | 8 (3–9) | 8 (2–9) | 7 (2–9) | <0.001 | N/A | N/A | N/A | N/A |
| Judgment | 8 (4–9) | 8 (3–9) | 7 (3–9) | <0.001 | 8 (3–9) | 8 (4–9) | 7 (3–9) | <0.001 |
| Professionalism | 8 (3–9) | 8 (5–9) | 7 (2–9) | 0.02 | 8 (3–9) | 8 (6–9) | 7 (3–9) | <0.001 |
| Overall | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.001 | 8 (2–9) | 8 (4–9) | 7 (2–9) | <0.001 |
Spearman rank correlation coefficients among the CEX subdomains for provider scores ranged from 0.71 to 0.86, except for setting (Table 4). Setting was less well correlated with the other subdomains, with correlation coefficients ranging from 0.39 to 0.41. Correlations between individual domains and the overall rating ranged from 0.80 to 0.86, except setting, which had a correlation of 0.55. Every correlation was significant at P<0.001. Correlation coefficients for recipient scores were very similar to those for provider scores (see Supporting Information, Appendix 3, in the online version of this article).
Table 4. Spearman correlation coefficients, providers

| | Setting | Organization | Communication | Content | Judgment | Professionalism |
|---|---|---|---|---|---|---|
| Setting | 1.00 | 0.40 | 0.40 | 0.39 | 0.39 | 0.41 |
| Organization | 0.40 | 1.00 | 0.80 | 0.71 | 0.77 | 0.73 |
| Communication | 0.40 | 0.80 | 1.00 | 0.79 | 0.82 | 0.77 |
| Content | 0.39 | 0.71 | 0.79 | 1.00 | 0.80 | 0.74 |
| Judgment | 0.39 | 0.77 | 0.82 | 0.80 | 1.00 | 0.78 |
| Professionalism | 0.41 | 0.73 | 0.77 | 0.74 | 0.78 | 1.00 |
| Overall | 0.55 | 0.80 | 0.84 | 0.83 | 0.86 | 0.82 |
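The rank correlations reported here can be reproduced from paired domain scores. As a minimal Python sketch of Spearman's rank correlation with average ranks for ties (illustrative only; the study's analyses used SAS):

```python
def spearman_rho(x, y):
    """Spearman rank correlation: assign average 1-based ranks (ties share
    their average rank), then compute Pearson correlation on the ranks."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0.0] * len(v)
        i = 0
        while i < len(order):
            j = i
            # Extend j across a run of tied values.
            while j + 1 < len(order) and v[order[j + 1]] == v[order[i]]:
                j += 1
            avg_rank = (i + j) / 2 + 1  # average 1-based rank for the tie group
            for t in range(i, j + 1):
                r[order[t]] = avg_rank
            i = j + 1
        return r

    rx, ry = ranks(x), ranks(y)
    n = len(x)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    sx = sum((a - mx) ** 2 for a in rx) ** 0.5
    sy = sum((b - my) ** 2 for b in ry) ** 0.5
    return cov / (sx * sy)
```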
We analyzed 343 provider evaluations in the factor analysis; there were 6 missing values. The scree plot of eigenvalues did not support more than 1 factor; however, the rotated factor pattern for standardized regression coefficients for the first factor and the final communality estimates showed the setting component yielding smaller values than did other scale components (see Supporting Information, Appendix 4, in the online version of this article).
Reliability Testing
Weighted kappa scores for provider evaluations ranged from 0.28 (95% confidence interval [CI]: 0.01, 0.56) for setting to 0.59 (95% CI: 0.38, 0.80) for organization, and were generally higher for resident versus peer comparisons than for external versus peer comparisons. Weighted kappa scores for recipient evaluation were slightly lower for external versus peer evaluations, but agreement was no better than chance for resident versus peer evaluations (Table 5).
Table 5. Weighted kappa scores (95% CI) by evaluator comparison

| Domain | Provider: External vs Peer, N=144 | Provider: Resident vs Peer, N=42 | Recipient: External vs Peer, N=134 | Recipient: Resident vs Peer, N=43 |
|---|---|---|---|---|
| Setting | 0.39 (0.24, 0.54) | 0.28 (0.01, 0.56) | 0.34 (0.20, 0.48) | 0.48 (0.27, 0.69) |
| Organization | 0.43 (0.29, 0.58) | 0.59 (0.39, 0.80) | 0.39 (0.22, 0.55) | 0.03 (−0.23, 0.29) |
| Communication | 0.34 (0.19, 0.49) | 0.52 (0.37, 0.68) | 0.36 (0.22, 0.51) | 0.02 (−0.18, 0.23) |
| Content | 0.38 (0.25, 0.51) | 0.53 (0.27, 0.80) | N/A | N/A |
| Judgment | 0.36 (0.22, 0.49) | 0.54 (0.25, 0.83) | 0.28 (0.15, 0.42) | −0.12 (−0.34, 0.09) |
| Professionalism | 0.47 (0.32, 0.63) | 0.47 (0.23, 0.72) | 0.35 (0.18, 0.51) | 0.01 (−0.29, 0.26) |
| Overall | 0.50 (0.36, 0.64) | 0.45 (0.24, 0.67) | 0.31 (0.16, 0.48) | 0.07 (−0.20, 0.34) |
DISCUSSION
In this study we found that an evaluation tool for direct observation of housestaff and hospitalists generated a range of scores and was well validated in the sense of performing similarly across 2 different institutions and among both trainees and attendings, while having high internal consistency. However, external evaluators gave consistently lower marks than peer evaluators at both sites, resulting in low reliability when comparing these 2 groups of raters.
It has traditionally been difficult to conduct direct evaluations of handoffs, because they may occur at haphazard times, in variable locations, and without very much advance notice. For this reason, several attempts have been made to incorporate peers in evaluations of handoff practices.[5, 39, 40] Using peers to conduct evaluations also has the advantage that peers are more likely to be familiar with the patients being handed off and might recognize handoff flaws that external evaluators would miss. Nonetheless, peer evaluations have some important liabilities. Peers may be unwilling or unable to provide honest critiques of their colleagues given that they must work closely together for years. Trainee peers may also lack sufficient clinical expertise or experience to accurately assess competence. In our study, we found that peers gave consistently higher marks to their colleagues than did external evaluators, suggesting they may have found it difficult to criticize their colleagues. We conclude that peer evaluation alone is likely an insufficient means of evaluating handoff quality.
Supervising residents gave very similar marks as intern peers, suggesting that they also are unwilling to criticize, are insufficiently experienced to evaluate, or alternatively, that the peer evaluations were reasonable. We suspect the latter is unlikely given that external evaluator scores were consistently lower than peers. One would expect the external evaluators to be biased toward higher scores given that they are not familiar with the patients and are not able to comment on inaccuracies or omissions in the sign‐out.
The tool appeared to perform less well in most cases for recipients than for providers, with a narrower range of scores and low‐weighted kappa scores. Although recipients play a key role in ensuring a high‐quality sign‐out by paying close attention, ensuring it is a bidirectional conversation, asking appropriate questions, and reading back key information, it may be that evaluators were unable to place these activities within the same domains that were used for the provider evaluation. An altogether different recipient evaluation approach may be necessary.[41]
In general, scores were clustered at the top of the score range, as is typical for evaluations. One strategy to spread out scores further would be to refine the tool by adding anchors for satisfactory performance, not just at the extremes. A second approach might be to reduce the grading scale to only 3 points (unsatisfactory, satisfactory, superior) to force more scores to the middle. However, this approach might limit the discrimination ability of the tool.
We have previously studied the use of this tool among nurses. In that study, we also found consistently higher scores by peers than by external evaluators. We did, however, find a positive effect of experience, in which more experienced nurses received higher scores on average. We did not observe a similar training effect in this study. There are several possible explanations for the lack of a training effect. First, the types of handoffs assessed may have played a role. At UCM, some assessed handoffs were from night staff to day staff, which might be of lower quality than day‐to‐night handoffs, whereas at Yale, all handoffs were from day to night teams. Thus, average scores at UCM (primarily hospitalists) might have been lowered by the type of handoff provided. Second, given that hospitalist evaluations were conducted exclusively at UCM and housestaff evaluations exclusively at Yale, the lack of difference between hospitalists and housestaff may also have been related to differences in evaluation practice or handoff practice at the 2 sites, not necessarily to training level. Third, in our experience, attending physicians provide briefer, less comprehensive sign‐outs than trainees, particularly when communicating with equally experienced attendings; these sign‐outs may appropriately be scored lower on the tool. Fourth, the great majority of the hospitalists at UCM were within 5 years of residency and therefore not much more experienced than the trainees. Finally, it is possible that skills do not improve over time, given the widespread lack of observation and feedback on this important skill during training years.
The high internal consistency of most of the subdomains and the loading of all subdomains except setting onto 1 factor are evidence of convergent construct validity, but also suggest that evaluators have difficulty distinguishing among components of sign‐out quality. Internal consistency may also reflect a halo effect, in which scores on different domains are all influenced by a common overall judgment.[42] We are currently testing a shorter version of the tool including domains only for content, professionalism, and setting in addition to overall score. The fact that setting did not correlate as well with the other domains suggests that sign‐out practitioners may not have or exercise control over their surroundings. Consequently, it may ultimately be reasonable to drop this domain from the tool, or alternatively, to refocus on the need to ensure a quiet setting during sign‐out skills training.
There are several limitations to this study. External evaluations were conducted by personnel who were not familiar with the patients, and they may therefore have overestimated the quality of sign‐out. Studying different types of physicians at different sites might have limited our ability to identify differences by training level. As is commonly seen in evaluation studies, scores were skewed to the high end, although we did observe some use of the full range of the tool. Finally, we were limited in our ability to test inter‐rater reliability because of the multiple sources of variability in the data (numerous different raters, with different backgrounds at different settings, rating different individuals).
Transfers of patient care among trainee physicians within the hospital typically occur at least twice a day, and their frequency has increased as work hours have declined.[1] The 2011 Accreditation Council for Graduate Medical Education (ACGME) guidelines,[2] which restrict intern shifts to a maximum of 16 hours from a previous maximum of 30, have likely increased the frequency of physician trainee handoffs even further. Similarly, transfers among hospitalist attendings occur at least twice a day, given typical shifts of 8 to 12 hours.
Given the frequency of these transfers, and the potential for harm generated by failed transitions,[3, 4, 5, 6] end‐of‐shift written and verbal handoffs have assumed increasing importance in hospital care for both trainees and hospitalist attendings.
The ACGME now requires that programs assess the competency of trainees in handoff communication.[2] Yet, there are few tools for assessing the quality of sign‐out communication. Those that exist primarily focus on the written sign‐out, and are rarely validated.[7, 8, 9, 10, 11, 12] Furthermore, it is uncertain whether such assessments must be done by supervisors or whether peers can participate in the evaluation. In this prospective multi‐institutional study we assess the performance characteristics of a verbal sign‐out evaluation tool for internal medicine housestaff and hospitalist attendings, and examine whether it can be used by peers as well as by external evaluators. This tool has previously been found to effectively discriminate between experienced and inexperienced nurses conducting nursing handoffs.[13]
METHODS
Tool Design and Measures
The Handoff CEX (clinical evaluation exercise) is a structured assessment based on the format of the mini‐CEX, a previously validated instrument used to assess the quality of the history and physical examination performed by trainees.[14, 15, 16, 17] To maximize content validity, we developed the tool based on themes we identified from our own expertise,[1, 5, 6, 8, 18, 19, 20, 21, 22, 23, 24, 25, 26, 27, 28, 29] the ACGME core competencies for trainees,[2] and the literature. First, standardization has numerous demonstrable benefits for safety in general and for handoffs in particular.[30, 31, 32] Consequently, we created a domain for organization in which standardization was a characteristic of high performance.
Second, there is evidence that people engaged in conversation routinely overestimate peer comprehension,[27] and that explicit strategies to combat this overestimation, such as confirming understanding, explicitly assigning tasks rather than using open‐ended language, and using concrete language, are effective.[33] Accordingly we created a domain for communication skills, which is also an ACGME competency.
Third, although there were no formal guidelines for sign‐out content when we developed this tool, our own research had demonstrated that the content elements most often missing, and felt to be important by stakeholders, related to clinical condition and to explicating thinking processes,[5, 6] so we created a domain for content that highlighted these areas and met the ACGME competency of medical knowledge. In accordance with standards for evaluation of learners, we incorporated a domain for judgment to identify where trainees were in the RIME spectrum of reporter, interpreter, manager, and educator.
Next, we added a section for professionalism in accordance with the ACGME core competencies of professionalism and patient care.[34] To avoid the disinclination of peers to label each other unprofessional, we labeled the professionalism domain as patient‐focused on the tool.
Finally, we included a domain for setting because of an extensive literature demonstrating increased handoff failures in noisy or interruptive settings.[35, 36, 37] We then revised the tool slightly based on our experiences among nurses and students.[13, 38] The final tool included the 6 domains described above and an assessment of overall competency. Each domain was scored on a 9‐point scale and included descriptive anchors at high and low ends of performance. We further divided the scale into 3 main sections: unsatisfactory (score 1–3), satisfactory (4–6), and superior (7–9). We designed 2 tools, 1 to assess the person providing the handoff and 1 to assess the handoff recipient, each with its own descriptive anchors. The recipient tool did not include a content domain (see Supporting Information, Appendix 1, in the online version of this article).
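The scoring structure described above is simple enough to express directly. The following is an illustrative sketch only, not the published instrument: the domain names, 9-point scale, and three score bands come from the text, while the identifiers and function are our own invention.

```python
# Illustrative sketch of the Handoff CEX structure (not the published
# instrument): domain names, the 9-point scale, and the three score
# bands follow the text; everything else is invented.
PROVIDER_DOMAINS = [
    "setting", "organization", "communication",
    "content", "judgment", "professionalism",
]
# The recipient tool omits the content domain.
RECIPIENT_DOMAINS = [d for d in PROVIDER_DOMAINS if d != "content"]

def band(score: int) -> str:
    """Map a 9-point domain score to its section of the scale."""
    if not 1 <= score <= 9:
        raise ValueError("domain scores range from 1 to 9")
    if score <= 3:
        return "unsatisfactory"
    if score <= 6:
        return "satisfactory"
    return "superior"

print(band(7))  # a score of 7 falls in the superior band
```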
Setting and Subjects
We tested the tool in 2 different urban academic medical centers: the University of Chicago Medicine (UCM) and Yale‐New Haven Hospital (Yale). At UCM, we tested the tool among hospitalists, nurse practitioners, and physician assistants during the Monday and Tuesday morning and Friday evening sign‐out sessions. At Yale, we tested the tool among housestaff during the evening sign‐out session from the primary team to the on‐call covering team.
The UCM is a 550‐bed urban academic medical center in which the nonteaching hospitalist service cares for patients with liver disease or with end‐stage renal or lung disease awaiting transplant, and for a small fraction of general medicine and oncology patients when the housestaff service exceeds its cap. No formal training on sign‐out is provided to attending or midlevel providers. The nonteaching hospitalist service operates as a separate service from the housestaff service and consists of 38 hospitalist clinicians (hospitalist attendings, nurse practitioners, and physician assistants). There are 2 handoffs each day. In the morning the departing night hospitalist hands off to the incoming daytime hospitalist or midlevel provider. These handoffs occur at 7:30 am in a dedicated room. In the evening the daytime hospitalist or midlevel provider hands off to an incoming night hospitalist. This handoff occurs at 5:30 pm or 7:30 pm in a dedicated location. The written sign‐out is maintained on a Microsoft Word (Microsoft Corp., Redmond, WA) document on a password‐protected server and updated daily.
Yale is a 946‐bed urban academic medical center with a large internal medicine training program. Formal sign‐out education that covers the main domains of the tool is provided to new interns during the first 3 months of the year,[19] and a templated electronic medical record‐based electronic written handoff report is produced by the housestaff for all patients.[22] Approximately half of inpatient medicine patients are cared for by housestaff teams, which are entirely separate from the hospitalist service. Housestaff sign‐out occurs between 4 pm and 7 pm every night. At a minimum, the departing intern signs out to the incoming intern; this handoff is typically supervised by at least 1 second‐ or third‐year resident. All patients are signed out verbally; in addition, the written handoff report is provided to the incoming team. Most handoffs occur in a quiet charting room.
Data Collection
Data collection at UCM occurred between March and December 2010 on 3 days of each week: Mondays, Tuesdays, and Fridays. On Mondays and Tuesdays the morning handoffs were observed; on Fridays the evening handoffs were observed. Data collection at Yale occurred between March and May 2011. Only evening handoffs from the primary team to the overnight coverage were observed. At both sites, participants provided verbal informed consent prior to data collection. At the time of an eligible sign‐out session, a research assistant (D.R. at Yale, P.S. at UCM) provided the evaluation tools to all members of the incoming and outgoing teams, and observed the sign‐out session himself. Each person providing a handoff was asked to evaluate the recipient of the handoff; each person receiving a handoff was asked to evaluate the provider of the handoff. In addition, the trained third‐party observer (D.R., P.S.) evaluated both the provider and recipient of the handoff. The external evaluators were trained in principles of effective communication and the use of the tool, with specific review of anchors at each end of each domain. One evaluator had a DO degree and was completing an MPH degree. The second evaluator was an experienced clinical research assistant whose training consisted of supervised observation of 10 handoffs by a physician investigator. At Yale, if a resident was present, she or he was also asked to evaluate both the provider and recipient of the handoff. Consequently, every sign‐out session included at least 2 evaluations of each participant, 1 by a peer evaluator and 1 by a consistent external evaluator who did not know the patients. At Yale, many sign‐outs also included a third evaluation by a resident supervisor.
The study was approved by the institutional review boards at both UCM and Yale.
Statistical Analysis
We obtained mean, median, and interquartile range of scores for each subdomain of the tool as well as the overall assessment of handoff quality. We assessed convergent construct validity by examining performance of the tool in different contexts. To do so, we determined whether scores differed by type of participant (provider or recipient), by site, by training level of evaluatee, or by type of evaluator (external, resident supervisor, or peer) by using Wilcoxon rank sum tests and Kruskal‐Wallis tests. For the assessment of differences in ratings by training level, we used evaluations of sign‐out providers only, because the 2 sites differed in scores for recipients. We also assessed construct validity by using Spearman rank correlation coefficients to describe the internal consistency of the tool in terms of the correlation between domains of the tool, and we conducted an exploratory factor analysis to gain insight into whether the subdomains of the tool were measuring the same construct. In conducting this analysis, we restricted the dataset to evaluations of sign‐out providers only, and used a principal components estimation method, a promax rotation, and squared multiple correlation communality priors. Finally, we conducted some preliminary studies of reliability by testing whether different types of evaluators provided similar assessments. We calculated a weighted kappa using Fleiss‐Cohen weights for external versus peer scores and again for supervising resident versus peer scores (Yale only). We were not able to assess test‐retest reliability given the nature of the sign‐out process. Statistical significance was defined as a P value <0.05, and analyses were performed using SAS 9.2 (SAS Institute, Cary, NC).
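For readers unfamiliar with Fleiss-Cohen weighting, the weighted kappa computation can be sketched in plain Python. This is an illustrative reimplementation on invented paired ratings, not the study's analysis code (the analyses above were run in SAS 9.2); Fleiss-Cohen weights are the quadratic weights shown in the comment.

```python
# Plain-Python sketch of a quadratic (Fleiss-Cohen) weighted kappa on a
# 9-point scale, using invented paired ratings. Illustrative only.
def weighted_kappa(rater1, rater2, k=9):
    """Quadratic-weighted kappa for paired integer ratings in 1..k."""
    n = len(rater1)
    assert n == len(rater2) and n > 0
    # Agreement weights: 1 on the diagonal, decaying quadratically off it.
    w = [[1 - ((i - j) ** 2) / ((k - 1) ** 2) for j in range(k)]
         for i in range(k)]
    # Observed joint distribution of the two raters, and its marginals.
    obs = [[0.0] * k for _ in range(k)]
    for a, b in zip(rater1, rater2):
        obs[a - 1][b - 1] += 1 / n
    row = [sum(r) for r in obs]
    col = [sum(obs[i][j] for i in range(k)) for j in range(k)]
    p_obs = sum(w[i][j] * obs[i][j] for i in range(k) for j in range(k))
    p_exp = sum(w[i][j] * row[i] * col[j] for i in range(k) for j in range(k))
    return (p_obs - p_exp) / (1 - p_exp)

peer = [8, 8, 7, 9, 8, 7, 8, 9]      # hypothetical peer ratings
external = [7, 7, 7, 8, 7, 6, 7, 8]  # hypothetical external ratings
print(round(weighted_kappa(peer, external), 2))
```

Quadratic weights penalize large disagreements much more than near-misses, which suits an ordinal 9-point scale where adjacent scores represent similar performance.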
RESULTS
A total of 149 handoff sessions were observed: 89 at UCM and 60 at Yale. Each site conducted a similar total number of evaluations: 336 at UCM, 337 at Yale. These sessions involved 97 unique individuals, 34 at UCM and 63 at Yale. Overall scores were high at both sites, but a wide range of scores was applied (Table 1).
Table 1. Handoff CEX scores (providers, N=343; recipients, N=330)

| Domain | Provider Median (IQR) | Provider Mean (SD) | Provider Range | Recipient Median (IQR) | Recipient Mean (SD) | Recipient Range | P Value |
|---|---|---|---|---|---|---|---|
| Setting | 7 (6–9) | 7.0 (1.7) | 2–9 | 7 (6–9) | 7.3 (1.6) | 2–9 | 0.05 |
| Organization | 7 (6–8) | 7.2 (1.5) | 2–9 | 8 (6–9) | 7.4 (1.4) | 2–9 | 0.07 |
| Communication | 7 (6–9) | 7.2 (1.6) | 1–9 | 8 (7–9) | 7.4 (1.5) | 2–9 | 0.22 |
| Content | 7 (6–8) | 7.0 (1.6) | 2–9 | | | | |
| Judgment | 8 (6–8) | 7.3 (1.4) | 3–9 | 8 (7–9) | 7.5 (1.4) | 3–9 | 0.06 |
| Professionalism | 8 (7–9) | 7.4 (1.5) | 2–9 | 8 (7–9) | 7.6 (1.4) | 3–9 | 0.23 |
| Overall | 7 (6–8) | 7.1 (1.5) | 2–9 | 7 (6–8) | 7.4 (1.4) | 2–9 | 0.02 |
Handoff Providers
A total of 343 evaluations of handoff providers were completed regarding 67 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff provider evaluation tool was professionalism (median: 8; interquartile range [IQR]: 7–9). The lowest rated domain was content (median: 7; IQR: 6–8) (Table 1).
Handoff Recipients
A total of 330 evaluations of handoff recipients were completed regarding 58 unique individuals. For each domain, scores spanned the full range from unsatisfactory to superior. The highest rated domain on the handoff recipient evaluation tool was professionalism, with a median of 8 (IQR: 7–9). The lowest rated domain was setting, with a median score of 7 (IQR: 6–9) (Table 1).
Validity Testing
Comparing provider scores to recipient scores, recipients received significantly higher scores for overall assessment (Table 1). Scores at UCM and Yale were similar in all domains for providers but were slightly lower at UCM in several domains for recipients (see Supporting Information, Appendix 2, in the online version of this article). Scores did not differ significantly by training level (Table 2). Third‐party external evaluators consistently gave lower marks for the same handoff than peer evaluators did (Table 3).
Table 2. Handoff CEX scores by training level (providers only), median (range)

| Domain | NP/PA, N=33 | Subintern or Intern, N=170 | Resident, N=44 | Hospitalist, N=95 | P Value |
|---|---|---|---|---|---|
| Setting | 7 (2–9) | 7 (3–9) | 7 (4–9) | 7 (2–9) | 0.89 |
| Organization | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (3–9) | 0.11 |
| Communication | 8 (4–9) | 7 (2–9) | 7 (4–9) | 8 (1–9) | 0.72 |
| Content | 7 (3–9) | 7 (2–9) | 7 (4–9) | 7 (2–9) | 0.92 |
| Judgment | 8 (5–9) | 7 (3–9) | 8 (4–9) | 8 (4–9) | 0.09 |
| Professionalism | 8 (4–9) | 7 (2–9) | 8 (3–9) | 8 (4–9) | 0.82 |
| Overall | 7 (3–9) | 7 (2–9) | 8 (4–9) | 7 (2–9) | 0.28 |
Table 3. Handoff CEX scores by evaluator type, median (range)

| Domain | Provider: Peer, N=152 | Provider: Resident Supervisor, N=43 | Provider: External, N=147 | P Value | Recipient: Peer, N=145 | Recipient: Resident Supervisor, N=43 | Recipient: External, N=142 | P Value |
|---|---|---|---|---|---|---|---|---|
| Setting | 8 (3–9) | 7 (3–9) | 7 (2–9) | 0.02 | 8 (2–9) | 7 (3–9) | 7 (2–9) | <0.001 |
| Organization | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.18 | 8 (3–9) | 8 (6–9) | 7 (2–9) | <0.001 |
| Communication | 8 (3–9) | 8 (3–9) | 7 (1–9) | <0.001 | 8 (3–9) | 8 (4–9) | 7 (2–9) | <0.001 |
| Content | 8 (3–9) | 8 (2–9) | 7 (2–9) | <0.001 | N/A | N/A | N/A | N/A |
| Judgment | 8 (4–9) | 8 (3–9) | 7 (3–9) | <0.001 | 8 (3–9) | 8 (4–9) | 7 (3–9) | <0.001 |
| Professionalism | 8 (3–9) | 8 (5–9) | 7 (2–9) | 0.02 | 8 (3–9) | 8 (6–9) | 7 (3–9) | <0.001 |
| Overall | 8 (3–9) | 8 (3–9) | 7 (2–9) | 0.001 | 8 (2–9) | 8 (4–9) | 7 (2–9) | <0.001 |
Spearman rank correlation coefficients among the CEX subdomains for provider scores ranged from 0.71 to 0.86, except for setting (Table 4). Setting was less well correlated with the other subdomains, with correlation coefficients ranging from 0.39 to 0.41. Correlations between individual domains and the overall rating ranged from 0.80 to 0.86, except setting, which had a correlation of 0.55. Every correlation was significant at P<0.001. Correlation coefficients for recipient scores were very similar to those for provider scores (see Supporting Information, Appendix 3, in the online version of this article).
Table 4. Spearman correlation coefficients, provider evaluations

| | Setting | Organization | Communication | Content | Judgment | Professionalism |
|---|---|---|---|---|---|---|
| Setting | 1.00 | 0.40 | 0.40 | 0.39 | 0.39 | 0.41 |
| Organization | 0.40 | 1.00 | 0.80 | 0.71 | 0.77 | 0.73 |
| Communication | 0.40 | 0.80 | 1.00 | 0.79 | 0.82 | 0.77 |
| Content | 0.39 | 0.71 | 0.79 | 1.00 | 0.80 | 0.74 |
| Judgment | 0.39 | 0.77 | 0.82 | 0.80 | 1.00 | 0.78 |
| Professionalism | 0.41 | 0.73 | 0.77 | 0.74 | 0.78 | 1.00 |
| Overall | 0.55 | 0.80 | 0.84 | 0.83 | 0.86 | 0.82 |
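The rank correlations reported above can be sketched in a few lines of plain Python (mid-ranks for ties). The score vectors below are invented for illustration; this is not the study's analysis code.

```python
# Illustrative plain-Python Spearman rank correlation (mid-ranks for ties).
def _ranks(xs):
    """1-based average ranks, with tied values sharing their mean rank."""
    order = sorted(range(len(xs)), key=lambda i: xs[i])
    ranks = [0.0] * len(xs)
    i = 0
    while i < len(order):
        j = i
        while j + 1 < len(order) and xs[order[j + 1]] == xs[order[i]]:
            j += 1
        mid = (i + j) / 2 + 1  # mean rank of tied positions i..j
        for idx in order[i:j + 1]:
            ranks[idx] = mid
        i = j + 1
    return ranks

def spearman(x, y):
    """Spearman correlation: Pearson correlation of the rank vectors."""
    rx, ry = _ranks(x), _ranks(y)
    n = len(rx)
    mx, my = sum(rx) / n, sum(ry) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(rx, ry))
    vx = sum((a - mx) ** 2 for a in rx)
    vy = sum((b - my) ** 2 for b in ry)
    return cov / (vx * vy) ** 0.5

# Hypothetical paired domain scores (e.g., communication vs. judgment)
comm = [7, 8, 6, 9, 7, 8, 5, 9]
judg = [7, 8, 7, 9, 6, 8, 5, 9]
print(round(spearman(comm, judg), 2))
```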
We analyzed 343 provider evaluations in the factor analysis; there were 6 missing values. The scree plot of eigenvalues did not support more than 1 factor; however, the rotated factor pattern for standardized regression coefficients for the first factor and the final communality estimates showed the setting component yielding smaller values than did other scale components (see Supporting Information, Appendix 4, in the online version of this article).
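The scree check can be illustrated from the reported provider correlation matrix alone (the Table 4 values). This sketch assumes NumPy and is not the study's SAS factor analysis; a dominant first eigenvalue with all others below 1 (the Kaiser criterion) is consistent with a single factor.

```python
# Eigenvalues of the reported 6x6 provider Spearman matrix (Table 4).
import numpy as np

# Order: setting, organization, communication, content, judgment, professionalism
R = np.array([
    [1.00, 0.40, 0.40, 0.39, 0.39, 0.41],
    [0.40, 1.00, 0.80, 0.71, 0.77, 0.73],
    [0.40, 0.80, 1.00, 0.79, 0.82, 0.77],
    [0.39, 0.71, 0.79, 1.00, 0.80, 0.74],
    [0.39, 0.77, 0.82, 0.80, 1.00, 0.78],
    [0.41, 0.73, 0.77, 0.74, 0.78, 1.00],
])

# Descending eigenvalues: the values a scree plot would display.
eigenvalues = np.sort(np.linalg.eigvalsh(R))[::-1]
print(np.round(eigenvalues, 2))
```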
Reliability Testing
Weighted kappa scores for provider evaluations ranged from 0.28 (95% confidence interval [CI]: 0.01, 0.56) for setting to 0.59 (95% CI: 0.38, 0.80) for organization, and were generally higher for resident versus peer comparisons than for external versus peer comparisons. Weighted kappa scores for recipient evaluation were slightly lower for external versus peer evaluations, but agreement was no better than chance for resident versus peer evaluations (Table 5).
Table 5. Weighted kappa scores by evaluator pairing (95% CI)

| Domain | Provider: External vs Peer, N=144 | Provider: Resident vs Peer, N=42 | Recipient: External vs Peer, N=134 | Recipient: Resident vs Peer, N=43 |
|---|---|---|---|---|
| Setting | 0.39 (0.24, 0.54) | 0.28 (0.01, 0.56) | 0.34 (0.20, 0.48) | 0.48 (0.27, 0.69) |
| Organization | 0.43 (0.29, 0.58) | 0.59 (0.39, 0.80) | 0.39 (0.22, 0.55) | 0.03 (−0.23, 0.29) |
| Communication | 0.34 (0.19, 0.49) | 0.52 (0.37, 0.68) | 0.36 (0.22, 0.51) | 0.02 (−0.18, 0.23) |
| Content | 0.38 (0.25, 0.51) | 0.53 (0.27, 0.80) | N/A | N/A |
| Judgment | 0.36 (0.22, 0.49) | 0.54 (0.25, 0.83) | 0.28 (0.15, 0.42) | −0.12 (−0.34, 0.09) |
| Professionalism | 0.47 (0.32, 0.63) | 0.47 (0.23, 0.72) | 0.35 (0.18, 0.51) | −0.01 (−0.29, 0.26) |
| Overall | 0.50 (0.36, 0.64) | 0.45 (0.24, 0.67) | 0.31 (0.16, 0.48) | 0.07 (−0.20, 0.34) |
DISCUSSION
In this study we found that an evaluation tool for direct observation of housestaff and hospitalists generated a range of scores, performed similarly across 2 different institutions and among both trainees and attendings, and had high internal consistency. However, external evaluators gave consistently lower marks than peer evaluators at both sites, resulting in low reliability when comparing these 2 groups of raters.
It has traditionally been difficult to conduct direct evaluations of handoffs, because they may occur at haphazard times, in variable locations, and without very much advance notice. For this reason, several attempts have been made to incorporate peers in evaluations of handoff practices.[5, 39, 40] Using peers to conduct evaluations also has the advantage that peers are more likely to be familiar with the patients being handed off and might recognize handoff flaws that external evaluators would miss. Nonetheless, peer evaluations have some important liabilities. Peers may be unwilling or unable to provide honest critiques of their colleagues given that they must work closely together for years. Trainee peers may also lack sufficient clinical expertise or experience to accurately assess competence. In our study, we found that peers gave consistently higher marks to their colleagues than did external evaluators, suggesting they may have found it difficult to criticize their colleagues. We conclude that peer evaluation alone is likely an insufficient means of evaluating handoff quality.
Supervising residents gave marks very similar to those of intern peers, suggesting that they too are unwilling to criticize or are insufficiently experienced to evaluate, or alternatively, that the peer evaluations were reasonable. We suspect the last is unlikely, given that external evaluator scores were consistently lower than peers'. If anything, one would expect the external evaluators to be biased toward higher scores, given that they were not familiar with the patients and were not able to comment on inaccuracies or omissions in the sign‐out.
The tool appeared to perform less well in most cases for recipients than for providers, with a narrower range of scores and low‐weighted kappa scores. Although recipients play a key role in ensuring a high‐quality sign‐out by paying close attention, ensuring it is a bidirectional conversation, asking appropriate questions, and reading back key information, it may be that evaluators were unable to place these activities within the same domains that were used for the provider evaluation. An altogether different recipient evaluation approach may be necessary.[41]
In general, scores were clustered at the top of the score range, as is typical for evaluations. One strategy to spread out scores further would be to refine the tool by adding anchors for satisfactory performance, not just at the extremes. A second approach might be to reduce the grading scale to only 3 points (unsatisfactory, satisfactory, superior) to force more scores to the middle. However, this approach might limit the discriminative ability of the tool.
We have previously studied the use of this tool among nurses. In that study, we also found consistently higher scores by peers than by external evaluators. We did, however, find a positive effect of experience, in which more experienced nurses received higher scores on average. We did not observe a similar training effect in this study. There are several possible explanations for the lack of a training effect. First, the types of handoffs assessed may have played a role: at UCM, some assessed handoffs were from night staff to day staff, which may be lower in quality than day-to-night handoffs, whereas at Yale all observed handoffs were from day teams to night teams. Thus, average scores at UCM (primarily hospitalists) might have been lowered by the type of handoff provided. Second, given that hospitalist evaluations were conducted exclusively at UCM and housestaff evaluations exclusively at Yale, the lack of difference between hospitalists and housestaff may reflect differences in evaluation or handoff practice at the 2 sites rather than training level. Third, in our experience, attending physicians provide briefer, less comprehensive sign‐outs than trainees, particularly when communicating with equally experienced attendings; these sign‐outs may appropriately be scored lower on the tool. Fourth, the great majority of the hospitalists at UCM were within 5 years of residency and therefore not much more experienced than the trainees. Finally, it is possible that sign‐out skills simply do not improve over time, given the widespread lack of observation and feedback on this skill during training.
The high internal consistency of most of the subdomains and the loading of all subdomains except setting onto 1 factor are evidence of convergent construct validity, but also suggest that evaluators have difficulty distinguishing among components of sign‐out quality. Internal consistency may also reflect a halo effect, in which scores on different domains are all influenced by a common overall judgment.[42] We are currently testing a shorter version of the tool including domains only for content, professionalism, and setting in addition to overall score. The fact that setting did not correlate as well with the other domains suggests that sign‐out practitioners may not have or exercise control over their surroundings. Consequently, it may ultimately be reasonable to drop this domain from the tool, or alternatively, to refocus on the need to ensure a quiet setting during sign‐out skills training.
There are several limitations to this study. External evaluations were conducted by personnel who were not familiar with the patients, and they may therefore have overestimated the quality of sign‐out. Studying different types of physicians at different sites might have limited our ability to identify differences by training level. As is commonly seen in evaluation studies, scores were skewed to the high end, although we did observe some use of the full range of the tool. Finally, we were limited in our ability to test inter‐rater reliability because of the multiple sources of variability in the data (numerous different raters, with different backgrounds at different settings, rating different individuals).
In summary, we developed a handoff evaluation tool that was easily completed by housestaff and attendings without training, that performed similarly in a variety of different settings at 2 institutions, and that can in principle be used either for peer evaluations or for external evaluations, although peer evaluations may be positively biased. Further work will be done to refine and simplify the tool.
ACKNOWLEDGMENTS
Disclosures: Development and evaluation of the sign‐out CEX was supported by a grant from the Agency for Healthcare Research and Quality (1R03HS018278‐01). Dr. Arora is supported by a National Institute on Aging (K23 AG033763). Dr. Horwitz is supported by the National Institute on Aging (K08 AG038336) and by the American Federation for Aging Research through the Paul B. Beeson Career Development Award Program. Dr. Horwitz is also a Pepper Scholar with support from the Claude D. Pepper Older Americans Independence Center at Yale University School of Medicine (P30AG021342 NIH/NIA). No funding source had any role in the study design; in the collection, analysis, and interpretation of data; in the writing of the report; or in the decision to submit the article for publication. The content is solely the responsibility of the authors and does not necessarily represent the official views of the Agency for Healthcare Research and Quality, the National Institute on Aging, the National Institutes of Health, or the American Federation for Aging Research. Dr. Horwitz had full access to all of the data in the study and takes responsibility for the integrity of the data and the accuracy of the data analysis. An earlier version of this work was presented as a poster presentation at the Society of General Internal Medicine Annual Meeting in Orlando, Florida on May 9, 2012. Dr. Rand is now with the Department of Medicine, University of Vermont College of Medicine, Burlington, Vermont. Mr. Staisiunas is now with the Law School, Marquette University, Milwaukee, Wisconsin. The authors declare they have no conflicts of interest.
Appendix A
PROVIDER HAND‐OFF CEX TOOL
RECIPIENT HAND‐OFF CEX TOOL
Appendix B
Handoff CEX scores by site of evaluation
Values are median (range).

| Domain | Provider: UC, N=172 | Provider: Yale, N=170 | P Value | Recipient: UC, N=163 | Recipient: Yale, N=167 | P Value |
|---|---|---|---|---|---|---|
| Setting | 7 (2–9) | 7 (3–9) | 0.32 | 7 (2–9) | 7 (3–9) | 0.36 |
| Organization | 8 (2–9) | 7 (3–9) | 0.30 | 7 (2–9) | 8 (5–9) | 0.001 |
| Communication | 7 (1–9) | 7 (3–9) | 0.67 | 7 (2–9) | 8 (4–9) | 0.03 |
| Content | 7 (2–9) | 7 (2–9) | N/A | N/A | N/A | |
| Judgment | 8 (3–9) | 7 (3–9) | 0.60 | 7 (3–9) | 8 (4–9) | 0.001 |
| Professionalism | 8 (2–9) | 8 (3–9) | 0.67 | 8 (3–9) | 8 (4–9) | 0.35 |
| Overall | 7 (2–9) | 7 (3–9) | 0.41 | 7 (2–9) | 8 (4–9) | 0.005 |
Appendix C
Spearman correlation, recipients (N=330)
Spearman correlation coefficients

| | Setting | Organization | Communication | Judgment | Professionalism |
|---|---|---|---|---|---|
| Setting | 1.00 | 0.46 | 0.48 | 0.47 | 0.40 |
| Organization | 0.46 | 1.00 | 0.78 | 0.75 | 0.75 |
| Communication | 0.48 | 0.78 | 1.00 | 0.85 | 0.77 |
| Judgment | 0.47 | 0.75 | 0.85 | 1.00 | 0.74 |
| Professionalism | 0.40 | 0.75 | 0.77 | 0.74 | 1.00 |
| Overall | 0.60 | 0.77 | 0.84 | 0.82 | 0.77 |
All p values <0.0001
Appendix D
Factor analysis results for provider evaluations
Rotated factor pattern (standardized regression coefficients), N=336

| | Factor 1 | Factor 2 |
|---|---|---|
| Organization | 0.64 | 0.27 |
| Communication | 0.79 | 0.16 |
| Content | 0.82 | 0.06 |
| Judgment | 0.86 | 0.06 |
| Professionalism | 0.66 | 0.23 |
| Setting | 0.18 | 0.29 |
REFERENCES
1. Transfers of patient care between house staff on internal medicine wards: a national survey. Arch Intern Med. 2006;166(11):1173–1177.
2. Accreditation Council for Graduate Medical Education. Common program requirements. 2011. http://www.acgme‐2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed August 23, 2011.
3. Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866–872.
4. Communication failures: an insidious contributor to medical mishaps. Acad Med. 2004;79(2):186–194.
5. Communication failures in patient sign‐out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401–407.
6. Consequences of inadequate sign‐out for patient care. Arch Intern Med. 2008;168(16):1755–1760.
7. Adequacy of information transferred at resident sign‐out (in‐hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17(1):6–10.
8. What are covering doctors told about their patients? Analysis of sign‐out among internal medicine house staff. Qual Saf Health Care. 2009;18(4):248–255.
9. Using direct observation, formal evaluation, and an interactive curriculum to improve the sign‐out practices of internal medicine interns. Acad Med. 2010;85(7):1182–1188.
10. Doctors' handovers in hospitals: a literature review. Qual Saf Health Care. 2011;20(2):128–133.
11. Resident sign‐out and patient hand‐offs: opportunities for improvement. Teach Learn Med. 2011;23(2):105–111.
12. Use of an appreciative inquiry approach to improve resident sign‐out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287–291.
13. Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi:10.1111/j.1365‐2702.2012.04131.x.
14. The mini‐CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med. 1995;123(10):795–799.
15. Examiner differences in the mini‐CEX. Adv Health Sci Educ Theory Pract. 1997;2(1):27–33.
16. Assessing the reliability and validity of the mini‐clinical evaluation exercise for internal medicine residency training. Acad Med. 2002;77(9):900–904.
17. Construct validity of the miniclinical evaluation exercise (miniCEX). Acad Med. 2003;78(8):826–830.
18. Dropping the baton: a qualitative analysis of failures during the transition from emergency department to inpatient care. Ann Emerg Med. 2009;53(6):701–710.e4.
19. Development and implementation of an oral sign‐out skills curriculum. J Gen Intern Med. 2007;22(10):1470–1474.
20. Mixed methods evaluation of oral sign‐out practices. J Gen Intern Med. 2007;22(S1):S114.
21. Evaluation of an asynchronous physician voicemail sign‐out for emergency department admissions. Ann Emerg Med. 2009;54(3):368–378.
22. An institution‐wide handoff task force to standardise and improve physician handoffs. BMJ Qual Saf. 2012;21(10):863–871.
23. A model for building a standardized hand‐off protocol. Jt Comm J Qual Patient Saf. 2006;32(11):646–655.
24. Medication discrepancies in resident sign‐outs and their potential to harm. J Gen Intern Med. 2007;22(12):1751–1755.
25. A theoretical framework and competency‐based approach to improving handoffs. Qual Saf Health Care. 2008;17(1):11–14.
26. Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433–440.
27. Interns overestimate the effectiveness of their hand‐off communication. Pediatrics. 2010;125(3):491–496.
28. Improving clinical handovers: creating local solutions for a global problem. Qual Saf Health Care. 2009;18(4):244–245.
29. Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out. J Hosp Med. 2006;1(4):257–266.
30. Standardized sign‐out reduces intern perception of medical errors on the general internal medicine ward. Teach Learn Med. 2009;21(2):121–126.
31. SBAR: a shared mental model for improving communication between clinicians. Jt Comm J Qual Patient Saf. 2006;32(3):167–175.
32. Structuring flexibility: the potential good, bad and ugly in standardisation of handovers. Qual Saf Health Care. 2008;17(1):4–5.
33. Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125–132.
34. Residents' perceptions of professionalism in training and practice: barriers, promoters, and duty hour requirements. J Gen Intern Med. 2006;21(7):758–763.
35. Communication behaviours in a hospital setting: an observational study. BMJ. 1998;316(7132):673–676.
36. Communication loads on clinical staff in the emergency department. Med J Aust. 2002;176(9):415–418.
37. A systematic review of failures in handoff communication during intrahospital transfers. Jt Comm J Qual Patient Saf. 2011;37(6):274–284.
38. Hand‐off education and evaluation: piloting the observed simulated hand‐off experience (OSHE). J Gen Intern Med. 2010;25(2):129–134.
39. Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34(10):563–570.
40. A prospective observational study of physician handoff for intensive‐care‐unit‐to‐ward patient transfers. Am J Med. 2011;124(9):860–867.
- Characterizing physician listening behavior during hospitalist handoffs using the HEAR checklist (published online ahead of print December 20, 2012]. BMJ Qual Saf. doi:10.1136/bmjqs‐2012‐001138. , , , , .
- A constant error in psychological ratings. J Appl Psychol. 1920;4(1):25. .
- Transfers of patient care between house staff on internal medicine wards: a national survey. Arch Intern Med. 2006;166(11):1173–1177. , , , .
- Accreditation Council for Graduate Medical Education. Common program requirements. 2011; http://www.acgme‐2010standards.org/pdf/Common_Program_Requirements_07012011.pdf. Accessed August 23, 2011.
- Does housestaff discontinuity of care increase the risk for preventable adverse events? Ann Intern Med. 1994;121(11):866–872. , , , , .
- Communication failures: an insidious contributor to medical mishaps. Acad Med. 2004;79(2):186–194. , , .
- Communication failures in patient sign‐out and suggestions for improvement: a critical incident analysis. Qual Saf Health Care. 2005;14(6):401–407. , , , , .
- Consequences of inadequate sign‐out for patient care. Arch Intern Med. 2008;168(16):1755–1760. , , , , .
- Adequacy of information transferred at resident sign‐out (in‐hospital handover of care): a prospective survey. Qual Saf Health Care. 2008;17(1):6–10. , , , .
- What are covering doctors told about their patients? Analysis of sign‐out among internal medicine house staff. Qual Saf Health Care. 2009;18(4):248–255. , , , , .
- Using direct observation, formal evaluation, and an interactive curriculum to improve the sign‐out practices of internal medicine interns. Acad Med. 2010;85(7):1182–1188. , .
- Doctors' handovers in hospitals: a literature review. Qual Saf Health Care. 2011;20(2):128–133. , , , .
- Resident sign‐out and patient hand‐offs: opportunities for improvement. Teach Learn Med. 2011;23(2):105–111. , , , et al.
- Use of an appreciative inquiry approach to improve resident sign‐out in an era of multiple shift changes. J Gen Intern Med. 2012;27(3):287–291. , , , et al.
- Validation of a handoff assessment tool: the Handoff CEX [published online ahead of print June 7, 2012]. J Clin Nurs. doi: 10.1111/j.1365–2702.2012.04131.x. , , , , , .
- The mini‐CEX (clinical evaluation exercise): a preliminary investigation. Ann Intern Med. 1995;123(10):795–799. , , , .
- Examiner differences in the mini‐CEX. Adv Health Sci Educ Theory Pract. 1997;2(1):27–33. , , , .
- Assessing the reliability and validity of the mini‐clinical evaluation exercise for internal medicine residency training. Acad Med. 2002;77(9):900–904. , , , .
- Construct validity of the miniclinical evaluation exercise (miniCEX). Acad Med. 2003;78(8):826–830. , , , , .
- Dropping the baton: a qualitative analysis of failures during the transition from emergency department to inpatient care. Ann Emerg Med. 2009;53(6):701–710.e4. , , , , , .
- Development and implementation of an oral sign‐out skills curriculum. J Gen Intern Med. 2007;22(10):1470–1474. , , .
- Mixed methods evaluation of oral sign‐out practices. J Gen Intern Med. 2007;22(S1):S114. , , , .
- Evaluation of an asynchronous physician voicemail sign‐out for emergency department admissions. Ann Emerg Med. 2009;54(3):368–378. , , , et al.
- An institution‐wide handoff task force to standardise and improve physician handoffs. BMJ Qual Saf. 2012;21(10):863–871. , , , et al.
- A model for building a standardized hand‐off protocol. Jt Comm J Qual Patient Saf. 2006;32(11):646–655. , .
- Medication discrepancies in resident sign‐outs and their potential to harm. J Gen Intern Med. 2007;22(12):1751–1755. , , , , .
- A theoretical framework and competency‐based approach to improving handoffs. Qual Saf Health Care. 2008;17(1):11–14. , , , .
- Hospitalist handoffs: a systematic review and task force recommendations. J Hosp Med. 2009;4(7):433–440. , , , , , .
- Interns overestimate the effectiveness of their hand‐off communication. Pediatrics. 2010;125(3):491–496. , , , , .
- Improving clinical handovers: creating local solutions for a global problem. Qual Saf Health Care. 2009;18(4):244–245. , .
- Managing discontinuity in academic medical centers: strategies for a safe and effective resident sign‐out. J Hosp Med. 2006;1(4):257–266. , , , , .
- Standardized sign‐out reduces intern perception of medical errors on the general internal medicine ward. Teach Learn Med. 2009;21(2):121–126. , , .
- SBAR: a shared mental model for improving communication between clinicians. Jt Comm J Qual Patient Saf. 2006;32(3):167–175. , , .
- Structuring flexibility: the potential good, bad and ugly in standardisation of handovers. Qual Saf Health Care. 2008;17(1):4–5. .
- Handoff strategies in settings with high consequences for failure: lessons for health care operations. Int J Qual Health Care. 2004;16(2):125–132. , , , , .
- Residents' perceptions of professionalism in training and practice: barriers, promoters, and duty hour requirements. J Gen Intern Med. 2006;21(7):758–763. , , , , , .
- Communication behaviours in a hospital setting: an observational study. BMJ. 1998;316(7132):673–676. , .
- Communication loads on clinical staff in the emergency department. Med J Aust. 2002;176(9):415–418. , , , , .
- A systematic review of failures in handoff communication during intrahospital transfers. Jt Comm J Qual Patient Saf. 2011;37(6):274–284. , .
- Hand‐off education and evaluation: piloting the observed simulated hand‐off experience (OSHE). J Gen Intern Med. 2010;25(2):129–134. , , , et al.
- Handoffs causing patient harm: a survey of medical and surgical house staff. Jt Comm J Qual Patient Saf. 2008;34(10):563–570. , , , et al.
- A prospective observational study of physician handoff for intensive‐care‐unit‐to‐ward patient transfers. Am J Med. 2011;124(9):860–867. , , .
- Characterizing physician listening behavior during hospitalist handoffs using the HEAR checklist (published online ahead of print December 20, 2012]. BMJ Qual Saf. doi:10.1136/bmjqs‐2012‐001138. , , , , .
- A constant error in psychological ratings. J Appl Psychol. 1920;4(1):25. .
Copyright © 2013 Society of Hospital Medicine
A new way to treat ear infections
A new way to treat ear infections in children, called "individualized care," is described in the May 2013 issue of The Pediatric Infectious Disease Journal. It explains how to reduce the frequency of repeated ear infections nearly fivefold, and the need for ear tube surgery roughly sixfold, in your practice.
Dr. Janet Casey at Legacy Pediatrics in Rochester, N.Y.; Anthony Almudevar, Ph.D., of the University of Rochester; and I conducted the prospective, longitudinal, multiyear study with the support of the National Institutes of Health’s National Institute on Deafness and Other Communication Disorders and the Thrasher Research Fund (Pediatr. Infect. Dis. J. 2013 Jan. 21 [Epub ahead of print]).
The study compared three groups: children who were in the Legacy Pediatrics practice and received individualized care; control children in the Legacy practice who did not participate because their parents declined participation (they did not want venipunctures or ear taps); and community controls drawn from a different pediatric practice in the suburbs of Rochester that used the diagnostic criteria of the American Academy of Pediatrics and treated all children empirically with high-dose amoxicillin as endorsed by the former and new AAP treatment guidelines (Pediatrics 2013;131:e964-99).
The new treatment paradigm of individualized care included a tympanocentesis procedure, also called an ear tap, to determine precisely the bacteria causing the ear infection. Treatment was started with high-dose amoxicillin/clavulanate. The fluid sample was then taken to my laboratory at the Rochester General Hospital Research Institute, where the isolated bacteria were tested against a panel of antibiotics to determine whether to continue with amoxicillin/clavulanate or switch to a more effective antibiotic for the child based on culture susceptibility. With the ear tap and antibiotic testing, the frequency of repeated ear infections was reduced 2.5-fold compared with the Legacy practice controls who did not participate, and 4.6-fold compared with the community controls.
The most common reason for children to receive ear tubes is repeated ear infections, so when the frequency of ear infections was reduced, so too was the frequency of ear tube surgery. The new treatment approach resulted in a 2.6-fold reduction in ear tube surgeries in the individualized care group compared with the Legacy Pediatrics controls, and a 6.2-fold reduction compared with the community controls.
Allowing the child to receive an ear tap was a requirement for the study. Dr. Casey and I found a way to do the procedure painlessly by instilling 8% Novocain drops into the ear canal to anesthetize the tympanic membrane. After 15 minutes, there was no pain when the tap was done. We used a papoose board to hold the child still.
The ear-tap procedure not only allowed individualized care with the striking results reported; it also allowed more rapid healing of the ear, since removing the pus and bacteria from behind the eardrum let the antibiotics work better and the immune system clear the infection more effectively.
The article discusses reasons for the remarkable difference in results with the individualized care approach. First, Dr. Casey and I have undergone special training from ear, nose, and throat (ENT) doctors in the diagnosis of ear infections.
In earlier studies, a group of experts in otitis media diagnosis joined together in a continuing medical education course, sponsored by Outcomes Management Education Workshops, that used video exams to test whether pediatricians, family physicians, and urgent care physicians could correctly distinguish true acute otitis media (AOM) from otitis media with effusion (OME) and from normal variations in the tympanic membrane exam. We found that physicians in all three specialties, residents in training in those specialties, nurse practitioners, and physician assistants overdiagnosed AOM about half the time.
Second, the selection of antibiotic proved to be key. Dr. Casey and I have the only otitis media research center in the United States providing tympanocentesis data at the current time. We have found that amoxicillin kills the bacteria causing AOM infections in children in the Rochester area only about 30% of the time. By knowing the bacteria, an evidence-based antibiotic can be chosen.
I expect that readers of this column will believe they diagnose AOM correctly nearly all the time and that it is the other physician who overdiagnoses. I expect that readers will be reluctant to depart from the AAP guideline recommendation of amoxicillin as the treatment of first choice. Most of all, I expect readers to be reluctant to undertake training in the ear tap procedure. Change is always resisted by the majority; it occurs only with time, when the evidence is strong and adoption grows.
Nevertheless, I encourage all to find an opportunity to attend a CME course on AOM diagnosis and I hope that resident training programs will incorporate more effective teaching on AOM diagnosis. I recommend high-dose amoxicillin/clavulanate as the treatment of choice for AOM; if it is not tolerated, then one of the preferred cephalosporins endorsed by the AAP guideline should be chosen.
I recommend that resident training programs include tympanocentesis as part of the curriculum. Why are residents taught how to do a spinal tap, an arterial puncture, and a lung tap, but not an ear tap? I also recommend that practicing pediatricians gain the skill to perform tympanocentesis. I recognize that some just won’t have the hand/eye coordination or steady hand needed, so it’s not for everyone. However, especially in group practices, a few trained providers could become an internal referral resource for getting the procedure done.
Arguments about malpractice are a smokescreen. The risks of tympanocentesis are no greater than venipuncture in trained and skilled hands. It is included as a standard procedure for pediatricians in our state without any additional malpractice insurance costs. And Dr. Casey and I have effectively managed to get the procedure done when a patient needs it without blowing our schedules off the map and raising the ire of patients and staff. It just takes a commitment.
It would be convenient to refer to an ENT doctor for a tympanocentesis, but most ENT doctors have not been trained to do the procedure while the child is awake and prefer to have the child asleep. Also, try to get a child in for an appointment with an ENT with no notice on the same day! Moreover, ENT doctors have been trained that if an ear tap is needed then it is advisable to go ahead and put in an ear tube.
Because of the success of this research, our center received a renewal of support from NIH in 2012 to continue the study through 2017. Several pediatric practices in Rochester are part of the research – Long Pond Pediatrics, Westfall Pediatrics, Sunrise Pediatrics, Lewis Pediatrics, and Pathway Pediatrics – as well as Dr. Margo Benoit of the department of otolaryngology at the University of Rochester and Dr. Frank Salamone and Dr. Kevin Kozara of the Rochester Otolaryngology Group, which is affiliated with Rochester General Hospital.
Dr. Pichichero, a specialist in pediatric infectious diseases, is director of the Rochester (N.Y.) General Hospital Research Institute. He is also a pediatrician at Legacy Pediatrics in Rochester. He said he had no relevant financial conflicts of interest to disclose.