PODCAST: An Inside Look at Medication Reconciliation
This month’s feature highlights an initiative of the Society of Hospital Medicine aimed at helping hospitalists drive team-based medication reconciliation programs.
Dr. Jeffrey Schnipper, director of clinical research for the hospitalist service at Brigham and Women’s Hospital and associate professor of medicine at Harvard Medical School, both in Boston, discusses the opportunities and challenges of med-rec and why he thinks med-rec shouldn’t be viewed as just a regulatory issue.
Dr. Stephanie Mueller, a clinician investigator and hospitalist researcher at Brigham and Women’s Hospital, talks about how MARQUIS components were developed and the role patients can play in med-rec. Dr. Amanda Salanitro, a hospitalist at the VA Tennessee Valley Healthcare System and an instructor at Vanderbilt University, both in Nashville, shares why she sees accountability as a critical component of med-rec quality improvement and her thoughts about how IT can help the process.
Click here to listen to our medication reconciliation podcast.
Data Mining Expert Explains Role Performance Tools Will Play in Future
Click here to listen to more of our interview with Paul Roscoe, CEO of the Washington, D.C.-based Advisory Board Company
The Why and How of Data Mining in Hospital Medicine
Click here to listen to excerpts of our interview with Dr. Deitelzweig, chair of SHM’s Practice Analysis Committee.
Learn How Best To Avoid Some of Data Mining’s Potential Pitfalls
Ensuring data quality and equivalency can present major challenges in data analytics, especially given the field’s dearth of uniform standards.
“The joke is that the great thing about health-care data standards is that there’s so many to choose from,” says Brett Davis, general manager of Deloitte Health Informatics. While data integration remains a big challenge, Davis says, the cost and complexity of the technology are dropping rapidly.
A lack of electronic health records (EHR) can limit more advanced data-mining functions. But that’s no excuse for not exploring the technology, says Steven Deitelzweig, MD, SFHM, system chairman for hospital medicine at Ochsner Health System in New Orleans and chair of SHM’s Practice Analysis Committee.
Deployment of that partial prerequisite also seems to be happening quickly around the country. The Office of the National Coordinator for Health IT (ONC) estimates that hospital adoption of at least a basic EHR system more than tripled between 2009 and 2012, to 44% from 12%. Meanwhile, an estimated 85% of hospitals were at least in possession of certified EHR technology by 2012.
Despite the falling barriers, Davis cautions that users should have clear goals in mind when setting up a new system. “There is the risk of building bridges to nowhere, where you just integrate data for the sake of integrating data but not knowing what questions and insights you want to glean from it,” he says.
ONC spokesman Peter Ashkenaz agrees, citing governance within a hospital or health center and education of all participants as important elements of any data-analytics plan. Among the questions that must be addressed, he says, are these: “Have we collected the right information? Are we doing so efficiently and securely with respect to privacy requirements? Are we sharing the data with the appropriate parties? Are we doing so in a way that is easily understood? Are we asking the right questions about how to use the information?”
The most fundamental question, Dr. Deitelzweig says, may be whether a hospitalist group, hospital, or health system is truly committed to using the technology. “If you’re going to make the investment in such things, then you really better be dedicated to understanding them and how best to utilize them. And give it some time,” he says. “I think people want solutions fast, and often they don’t take the time to individualize it or customize it.” TH
Bryn Nelson is a freelance medical writer in Seattle.
MARQUIS Highlights Need for Improved Medication Reconciliation
What is the best possible medication history? How is it done? Who should do it? When should it be done during a patient’s journey in and out of the hospital? What medication discrepancies—and potential adverse drug events—are most likely?
Those are questions veteran hospitalist Jason Stein, MD, tried to answer during an HM13 breakout session on medication reconciliation at the Gaylord National Resort and Conference Center in National Harbor, Md.
“How do you know as the discharging provider if the medication list you’re looking at is gold or garbage?” said Dr. Stein, associate director for quality improvement (QI) at Emory University in Atlanta and a mentor for SHM’s Multi-Center Medication Reconciliation Quality Improvement Study (MARQUIS) quality-research initiative.
“Sometimes it’s impossible to know what the patient was or wasn’t taking, but it doesn’t mean you don’t do your best,” he said, adding that hospitalists should attempt to get at least one reliable, corroborating source of information for a patient’s medication history.
Sometimes it is necessary to speak to family members or the community pharmacy, Dr. Schnipper said, because many patients can’t remember all of the drugs they are taking. Trying to do medication reconciliation at discharge when a best possible medication history (BPMH) has not been obtained can lead to more work for the provider, medication errors, or rehospitalizations. Ideally, knowledge of what the patient was taking before admission, as well as the patient’s health literacy and adherence history, should be gathered and documented once, early, and well during the hospitalization by a trained provider, according to Dr. Schnipper.
An SHM survey, however, showed that 50% to 70% of front-line providers have never received BPMH training, and 60% say they are not given the time.1
“Not knowing means a diligent provider would need to take a BPMH at discharge, which is a waste,” Dr. Stein said. It would be nice to tell from the electronic health record whether a true BPMH had been taken for every hospitalized patient—or at least every high-risk patient—but MARQUIS investigators say they have learned that this goal is not well supported by current information technology.
The MARQUIS program was launched in 2011 with a grant from the federal Agency for Healthcare Research and Quality. It began with a thorough review of the literature on medication reconciliation and the development of a toolkit of best practices. In 2012, six pilot sites were offered a menu of 11 MARQUIS medication-reconciliation interventions to choose from, along with help implementing them from an SHM mentor with expertise in both QI and medication safety.
Listen to more of our interview with MARQUIS principal investigator Jeffrey Schnipper, MD, MPH, FHM.
Participating sites have mobilized high-level hospital leadership and use a local champion (usually a hospitalist), tools for assessing high-risk patients, medication-reconciliation assistants or counselors, and pharmacist involvement. Different sites have employed different professional staff to take medication histories.
Dr. Schnipper said he expects another round of MARQUIS-mentored implementation, probably in 2014, after data from the first round have been analyzed. The program is tracking such outcomes as the number of potentially harmful, unintentional medication discrepancies per patient at participating sites.
The MARQUIS toolkit is available on the SHM website. TH
Larry Beresford is a freelance writer in San Francisco.
Reference
1. Schnipper JL, Mueller SK, Salanitro AH, Stein J. Got Med Wreck? Targeted Repairs from the Multi-Center Medication Reconciliation Quality Improvement Study (MARQUIS). PowerPoint presentation at Society of Hospital Medicine annual meeting, May 16-19, 2013, National Harbor, Md.
The RAC man cometh
If you have never heard of the Recovery Audit Contractor (RAC) program, it’s only a matter of time. A little bit of history is in order here. Between 2005 and 2008, a demonstration program that used Recovery Auditors identified Medicare overpayments, as well as underpayments, to both providers and suppliers of health care in selected states. The result was that a whopping $900 million in overpayments was returned to the Medicare Trust Fund, while close to $38 million in underpayments was given to health care providers.
Obviously, this program was a tremendous success for the Centers for Medicare and Medicaid Services (CMS), and it has since taken off in all 50 states. And, you guessed it, it remains a great boon for the Medicare Trust Fund.
In fiscal year 2010, $75.4 million in overpayments was collected and $16.9 million in underpayments returned; in fiscal year 2013, $2.2 billion in overpayments was collected, while $370 million was returned. Since the program’s inception, there has been $5.7 billion in total corrections, of which $5.4 billion was collected as overpayments.
Surprised? I think most of us, and our hospitals, could benefit from hospitalists learning more about the RAC and what we can do to guard against a successful audit and penalty. The Program for Evaluating Payment Patterns Electronic Report (PEPPER) provides provider-specific Medicare data statistics for discharges and services vulnerable to improper payment. PEPPER, available at pepperresources.org, was developed by TMF Health Quality Institute under contract with CMS.
PEPPER has many uses, but one of the most useful for a hospital is to compare its claims data over time to identify concerning trends, such as significant changes in billing practices, increasing length of stay, and over- or undercoding. In 2013, practicing good medicine just isn’t enough. You have to make sure you are documenting appropriately to justify the codes you bill. Outliers beware!
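To illustrate that kind of trend check, here is a minimal sketch in Python using hypothetical numbers rather than real PEPPER output; the metric, values, and 10% threshold are assumptions for the example, not anything PEPPER prescribes.

```python
# Hypothetical average length of stay by year; not real PEPPER data.
avg_length_of_stay = {2010: 4.1, 2011: 4.2, 2012: 4.9, 2013: 5.6}

THRESHOLD = 0.10  # flag year-over-year changes greater than 10% (assumed cutoff)

years = sorted(avg_length_of_stay)
for prev, curr in zip(years, years[1:]):
    change = (avg_length_of_stay[curr] - avg_length_of_stay[prev]) / avg_length_of_stay[prev]
    if abs(change) > THRESHOLD:
        print(f"{prev}->{curr}: length of stay changed {change:+.0%}; worth a closer look")
```

Run on these made-up numbers, the check flags the 2011-to-2012 and 2012-to-2013 jumps, the sort of outlier trend an auditor would notice, too.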
Dr. Hester is a hospitalist with Baltimore-Washington Medical Center who has a passion for empowering patients to partner in their health care. She is the creator of the Patient Whiz, a patient-engagement app for iOS.
Spare the hippocampus, preserve the memory in whole brain irradiation
ATLANTA – Sparing the hippocampus during whole brain irradiation can pay off in memory preservation for months to come, according to Dr. Vinai Gondi.
Adults with brain metastases who underwent whole brain radiation therapy (WBRT) with a conformal technique designed to minimize radiation dose to the hippocampus had a significantly smaller mean decline in verbal memory 4 months after treatment than did historical controls, reported Dr. Gondi, codirector of the Cadence Health Brain Tumor Center in Chicago and a coprincipal investigator in the Radiation Therapy Oncology Group Trial 0933.
"These phase II results are promising, and highlight the importance of the hippocampus as a radiosensitive structure central to memory toxicity," Dr. Gondi said in a briefing prior to his presentation in a plenary session of the American Society for Radiation Oncology.
The hippocampus has been shown to play host to neural stem cells that are constantly differentiating into new neurons throughout adult life, a process important for maintaining memory function, he noted.
Previous studies have shown that cranial irradiation with WBRT is associated with a decline in memory function at 4 to 6 months, as measured by the Hopkins Verbal Learning Test (HVLT) total recall and delayed recall items.
By using intensity modulated radiation therapy (IMRT) to shape the beam and largely spare the pocket of neural stem cells in the dentate gyrus portion of the hippocampus, the investigators hoped to avoid the decrements in memory function seen with earlier, less discriminating WBRT techniques, he said.
They enrolled 113 adults with brain metastases from various primary malignancies and assigned them to receive hippocampal-avoiding WBRT of 30 Gy delivered in 10 fractions. Radiation oncologists participating in the trial were trained in the technique, which involves careful identification of hippocampal landmarks and titration of the dose to minimize exposure of the hippocampus in general, and the dentate gyrus in particular. Under the protocol, the total radiation dose to the entire volume of the hippocampus can be no more than 10 Gy, and no single point in the hippocampus can receive more than 17 Gy.
Controls were patients in an earlier phase III clinical trial who underwent WBRT without hippocampal avoidance.
At 4 months, 100 patients treated with the hippocampal-sparing technique who were available for analysis had a 7% decline in the primary endpoint – delayed recall scores from baseline – compared with 30% for historical controls (P = .0003).
Among the 29 patients for whom 6-month data were available, the mean relative decline from baseline in delayed recall was 2% and in immediate recall was 0.7%. In contrast, there was a 3% increase in total recall scores.
The risk of metastasis to the hippocampus was 4.5% during follow-up, Dr. Gondi said.
The Radiation Therapy Oncology Group is currently developing a phase III trial of prophylactic cranial radiation with or without hippocampal avoidance for patients with small cell lung cancer.
The study demonstrates the value of improving and incorporating into practice newer radiation delivery technologies such as IMRT, said Dr. Bruce G. Haffty, a radiation oncologist at the Cancer Institute of New Jersey in New Brunswick, and ASTRO president-elect.
"It’s nice to have that technology available, and it’s now nice to see that we can use that technology to [reduce] memory loss and improve quality of life for our patients undergoing whole brain radiation therapy," he said.
Dr. Haffty moderated the briefing, but was not involved in the study.
RTOG 0933 was supported by the National Cancer Institute. Dr. Gondi and Dr. Haffty reported having no relevant financial conflicts.
AT THE ASTRO ANNUAL MEETING
Major finding: Patients who underwent whole brain radiation therapy with hippocampal avoidance had a 7% decline in delayed recall at 4 months, compared with 30% for historical controls.
Data source: A prospective phase II clinical trial of 113 patients vs. historical controls.
Disclosures: RTOG 0933 was supported by the National Cancer Institute. Dr. Gondi and Dr. Haffty reported having no relevant financial conflicts.
Hospitalists Should Take Wait-and-See Approach to Newly Approved Medications
I am a new hospitalist, out of residency for two years, and feel very uncertain about using new or recently approved medications on my patients. Do you have any suggestions about how or when new medications should be used in practice?
–David Ray, MD
Dr. Hospitalist responds:
I certainly can understand your trepidation about using newly approved medications. Although our system of evaluating and approving medications for clinical use is considered the most rigorous in the world, 16 so-called novel medications were pulled from the shelves from 2000 to 2010, which equates to 6% of the total approved during that period. All in all, not a bad ratio, but the number of poor outcomes associated with a high-profile dud can be astronomical.
I think there are several major reasons why we have adverse issues with medications that have survived the rigors of the initial FDA approval process. First, many human drug trials are conducted in developing countries, where the human genome is much more homogenous and the liabilities for injuries are far lower than in the U.S. Many researchers have acknowledged the significant role of pharmacogenomics and how each patient’s physiology and pathology are unique. Couple these with the tendency to test drugs one at a time in younger cohorts—very few medications are administered in this manner in the U.S.—and one can quickly see how complex the equation becomes.
Another reason is the weight accorded to clinical trials. All clinicians should be familiar with the phases (0 to 4) and processes by which the FDA analyzes human drug trials. The FDA usually requires that two “adequate and well-controlled” trials confirm that a drug is safe and effective before it approves it for sale to the public. Once a drug completes Phase 3, an extensive statistical analysis is conducted to assure that a drug’s demonstrated benefit is real and not the result of chance. But as it turns out, because the measured effects in most clinical trials are so small, chance is very hard to prove or disprove.
This was astutely demonstrated in a 2005 article published in the Journal of the American Medical Association (2005;294(2):218-228). John P. Ioannidis, MD, examined the results of 49 high-profile clinical-research studies, of which 45 found that the proposed intervention was effective. Of the 45 claiming effectiveness, seven (16%) were contradicted by subsequent studies, and seven others had found effects that were stronger than those of subsequent studies. Of the 26 randomized controlled trials that were followed up by larger trials, the initial finding was entirely contradicted in three cases (12%); another six cases (23%) found the benefit to be less than half of what had been initially reported.
In most instances, it wasn’t the therapy that changed but the sample size. In fact, many clinicians and biostatisticians believe many more so-called “evidence-based” practices or medicinals would be legitimately challenged if subjected to rigorous follow-up studies.
In my own personal experience as a hospitalist, I can think of two areas where the general medical community accepted initial studies only to refute them later: perioperative use of beta-blockers and inpatient glycemic control.
In light of the many high-profile medications that have been pulled from the market, I don’t like being in the first group to jump on the bandwagon. My general rule is to wait three to five years after a drug has been released before prescribing it for patients. As always, there are exceptions. In instances where new medications have profound or life-altering potential (e.g., the new anticoagulants or gene-targeting meds for certain cancers) and the risks are substantiated, I’m all in!
Do you have a problem or concern that you’d like Dr. Hospitalist to address? Email your questions to [email protected].
MGMA Surveys Make Hospitalists' Productivity Hard to Assess
The Medical Group Management Association (MGMA) surveys regard both a doctor who works the standard number of annual shifts that their practice defines as full time and a doctor who works many extra shifts as one full-time equivalent (FTE). This can cause confusion when assessing productivity per FTE.
For example, consider a hospitalist who generated 4,000 wRVUs while working 182 shifts—the standard number of shifts to be full time in that doctor’s practice—during the survey year. In the same practice, another hospitalist worked 39 extra shifts over the same year, for a total of 221 shifts, generating 4,860 wRVUs. If the survey contained only these two doctors, it would show them both as full time, with an average productivity per FTE of 4,430 wRVUs. But that would be misleading, because 1.0 FTE worth of work as defined by their practice—for both doctors—would have come to 4,000 wRVUs generated while working 182 shifts.
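To make the distortion concrete, here is a minimal sketch in Python of the two hypothetical doctors above, comparing the survey-style per-FTE average with productivity normalized to this practice’s 182-shift definition of 1.0 FTE. The numbers are the column’s illustrations, not survey data.

```python
# Two hypothetical hospitalists from the example above (illustrative only).
FULL_TIME_SHIFTS = 182  # this practice's definition of 1.0 FTE

doctors = [
    {"wrvus": 4000, "shifts": 182},  # worked exactly full time
    {"wrvus": 4860, "shifts": 221},  # worked 39 extra shifts
]

# Survey-style average: each doctor is counted as 1.0 FTE, extra shifts and all.
survey_avg = sum(d["wrvus"] for d in doctors) / len(doctors)

# Shift-normalized average: scale each doctor's output back to 182 shifts.
normalized_avg = sum(
    d["wrvus"] * FULL_TIME_SHIFTS / d["shifts"] for d in doctors
) / len(doctors)

print(f"Survey-style wRVUs per 'FTE':   {survey_avg:,.0f}")     # 4,430
print(f"Shift-normalized wRVUs per FTE: {normalized_avg:,.0f}")  # ~4,001
```

In this two-doctor illustration, the survey-style figure overstates what a strict 1.0 FTE produces by roughly 11%.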
In prior columns, I’ve highlighted some other numbers in hospitalist productivity and compensation surveys that can lead to confusion. But the MGMA survey methodology, which caps any single doctor at 1.0 FTE, may be the most confusing issue, potentially leading to meaningful misunderstandings.
More Details on FTE Definition
MGMA has been conducting physician compensation and productivity surveys across essentially all medical specialties for decades. Competing organizations conduct similar surveys, but most regard the MGMA survey as the most relevant and valuable.
For a long time, MGMA has regarded as “full time” any doctor working 0.75 FTE or greater, using the respondent practice’s definition of an FTE. No single doctor can ever be counted as more than 1.0 FTE, regardless of how much extra the doctor may have worked. Any doctor working 0.35-0.75 FTE is regarded as part time, and those working less than 0.35 FTE are excluded from the survey report. The fact that each practice might have a different definition of what constitutes an FTE is addressed by having a large number of respondents in most medical specialties.
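As a sketch of that classification rule—my paraphrase of the convention described above, not MGMA’s actual survey code—the logic looks like this:

```python
def mgma_fte_category(fte: float) -> str:
    """Classify a survey respondent per the MGMA convention described above.

    A sketch of the stated rule, not MGMA's actual survey code; `fte` uses
    the respondent practice's own definition of 1.0 FTE.
    """
    if fte >= 0.75:
        return "full time (counted as at most 1.0 FTE)"
    if fte >= 0.35:
        return "part time"
    return "excluded from the survey report"

print(mgma_fte_category(1.20))  # full time: extra work still counts as 1.0 FTE
print(mgma_fte_category(0.50))  # part time
print(mgma_fte_category(0.30))  # excluded from the survey report
```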
I’m uncertain how MGMA ended up not counting any single doctor as more than 1.0 FTE, even when they work a lot of extra shifts. My guess is that for the first years, or even decades, that MGMA conducted its survey, few, if any, medical practices had a strict definition of what constituted 1.0 FTE, and they simply didn’t keep track of which doctors worked extra shifts or days. So even if MGMA had wanted to count a doctor who worked extra shifts as more than 1.0 FTE, few practices distinguished the number of shifts or days that constituted full time from what counted as an “extra” shift. It probably made sense, then, to have just two categories: full time and part time.
As more practices began assigning FTE with greater precision, like nearly all hospitalist practices do, then using 0.75 FTE to separate full time and part time seemed practical, though imprecise. But keep in mind it also means that all of the doctors who work from 0.75 to 0.99 FTE (that is, something less than 1.0) offset, at least partially, those who work lots of extra shifts (i.e., above 1.0 FTE).
Data Application
My anecdotal experience is that a large portion of hospitalists, probably around half, work more shifts than what their practice regards as full time. I don’t know of any survey database that quantifies this, but my guess is that 25% to 35% of full-time hospitalists work extra shifts at their own practice, and maybe another 15% to 20% moonlight at a different practice. Let’s consider only those in the first category.
Chronic staffing shortages are one reason hospitalists so commonly work extra shifts at their own practice. Extra shifts are sometimes even required by the practice to make up for open positions. And in some places, the hospitalists choose not to fill open positions in order to preserve their ability to keep working more than the number of shifts required to be full time.
It would be great if we had a precise way to adjust the MGMA survey data for hospitalists who work above 1.0 FTE. Lacking that, let’s make three assumptions so we can adjust the reported compensation and productivity data to remove the effect of the many doctors working extra shifts, bringing the figures closer to a true 1.0 FTE. These numbers are my guesses based on lots of anecdotal experience. But they are only guesses. Don’t make too much of them.
- Assume 25% of hospitalists nationally work an average of 20% more than the full-time number of shifts for their practice. That is my best guess, and it intentionally leaves out those who moonlight for a practice other than their own.
- Some portion of those working extra shifts (above 1.0 FTE) is offset by survey respondents working between 0.75 and 1.0 FTE, leaving a wild guess of a net 20% of hospitalists working extra shifts.
- Last, assume that productivity and compensation on extra shifts are identical to those on “normal” shifts. This is not true for many practices, but when aggregating the data, it is probably reasonably close.
Using these assumptions (guesses, really), we can decrease both the reported survey mean and median productivity and compensation by about 5% to more accurately reflect results for hospitalists doing only the number of shifts required by the practice to be full time—no extra shifts. I’ll spare you the simple math showing how I arrived at the approximately 5%, but basically it is removing the 20% additional compensation and productivity generated by the net 20% of hospitalists who work extra shifts above 1.0 FTE.
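For readers who do want to see that simple math, here is one way to reconstruct it under the assumptions above (my sketch, not necessarily the author’s exact calculation; with these inputs it lands closer to 4%, in the same ballpark as the column’s roughly 5%):

```python
# Reconstruct the adjustment from the three assumptions (guesses) above.
net_share_extra = 0.20   # net share of hospitalists working extra shifts
extra_work = 0.20        # how much beyond full time they work, on average

# If 20% of doctors produce 1.2x a true FTE and 80% produce 1.0x,
# the per-"FTE" survey mean is inflated by this factor:
inflation = net_share_extra * (1 + extra_work) + (1 - net_share_extra)  # 1.04

# To back out a true 1.0-FTE figure, reduce reported values by:
adjustment = 1 - 1 / inflation
print(f"reduce reported mean/median by ~{adjustment:.1%}")  # ~3.8%
```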
Does It Really Matter?
The whole issue of hospitalists working many extra shifts yet counting as only 1.0 FTE in the MGMA survey might matter a lot to some; others might see it as useless hand-wringing. As long as a meaningful number of hospitalists work extra shifts, survey values for productivity and compensation will always run a little higher than those of the “average” 1.0 FTE hospitalist who works no extra shifts. But the difference may still be well within the survey’s range of error, and compensation per unit of work (wRVUs or encounters) probably isn’t much affected by this FTE issue.
Dr. Nelson has been a practicing hospitalist since 1988. He is co-founder and past president of SHM, and principal in Nelson Flores Hospital Medicine Consultants. He is co-director for SHM’s “Best Practices in Managing a Hospital Medicine Program” course. Write to him at [email protected].
Why Hospitalists Should Focus on Patient-Care Basics
We are all too familiar with the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) survey, a standardized set of questions deployed to a random sample of recently discharged patients. More recently, hospitalists have noticed the introduction of the Clinician and Group Consumer Assessment of Healthcare Providers and Systems (CG-CAHPS) survey, deployed to a random sample of recently evaluated ambulatory patients. HCAHPS results have been publicly reported since 2008; CG-CAHPS results will be in the near future. Beyond these, there is a variety of other CAHPS surveys, ranging from ambulatory surgery to patient-centered medical homes. For HCAHPS alone, more than 8,200 adult surveys are completed every day across almost 4,000 U.S. hospitals.1
In addition to being publicly reported and widely viewed online by patients, payors, and employers, the results are now tightly coupled to the reimbursement of hospitals and, in some cases, individual providers. As of October 2012, Medicare has tied 30% of its hospital value-based purchasing (VBP) program to hospitals’ HCAHPS survey results. For the foreseeable future, about one-third of a hospital’s financial bonus—or penalty—rests on how well our patients perceive their care. Many hospitals and practice groups have started coupling individual physicians’ compensation to their patients’ CAHPS scores. Within the approximately seven minutes it takes to complete the survey, our patients determine millions of dollars of physician and hospital reimbursement.1
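To put that weighting in concrete terms, here is a toy illustration (the 30% patient-experience share comes from the text above; the dollar amount and scores are hypothetical placeholders, and assigning the remaining 70% to clinical measures is an assumption about the rest of the VBP score):

```python
# Toy illustration of how the HCAHPS share of VBP translates into dollars.
at_risk = 1_000_000      # hypothetical VBP dollars a hospital has at risk
hcahps_weight = 0.30     # share of the VBP score tied to HCAHPS (from the text)
clinical_weight = 0.70   # assumed remainder (clinical process measures)

hcahps_score = 0.60      # hypothetical normalized domain scores, 0 to 1
clinical_score = 0.80

vbp_score = hcahps_weight * hcahps_score + clinical_weight * clinical_score
print(f"composite VBP score: {vbp_score:.2f}")
print(f"dollars riding on patient perception: ${hcahps_weight * at_risk:,.0f}")
```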
With all of the financial and reputational emphasis on HCAHPS, it is vital that hospitalists understand what these surveys actually measure and whether the results correlate with the quality of care patients receive. The questions currently address 11 domains of hospital care:
- Communication with doctors;
- Communication with nurses;
- Responsiveness of hospital staff;
- Pain management;
- Communication about medicines;
- Discharge information;
- Cleanliness of hospital environment;
- Quietness of hospital environment;
- Transitions of care;
- Overall rating of the hospital; and
- Willingness to recommend the hospital.
Because the domains of care are all very different, one can imagine a wide range of answers to the various questions; a patient can perceive that communication was excellent but that quietness and cleanliness were disgraceful. And, depending on which aspects of the stay the patient considers most important, he or she may rate the overall stay as excellent or disgraceful. Why? Because each of these ratings rests in the eye of the beholder.
But to keep pace, hospitals and providers across the country have invested millions of hours dissecting the meaning of the results and trying to improve upon them. My hospital has struggled for years with the “cleanliness” question, trying to figure out what our patients are trying to tell us: that we need to sweep and mop more often, that hospital supplies are cluttering our patient rooms, that the trashcans are overflowing or within eyesight? When we ask focus groups, we often get all of the above—and then try to implement several solutions all at once.
The quietness question is much easier to interpret but certainly difficult to improve upon. We have implemented “yacker trackers,” “quiet time,” and soft-wheeled trash cans. And the results of the surveys take months to come back and get analyzed, so it is difficult to quickly know if your interventions are actually working. Given that so many hospitals and providers are back-flipping to “play to the test,” we really need some validation that care is truly improving based on this patient feedback.
A recent New Yorker article brings to light a natural paradox in the medical field: patients who understand more about disease processes and medical information actually feel less, rather than more, informed. In other words, those who are actually the most well-informed may rate communication the lowest. The article also highlights the natural tension between providers being honest and providers being likable, especially considering they routinely have to deliver messages that patients do not want to hear:
- You need to quit smoking;
- Your weight is affecting your health; and
- Your disease is not curable.
Given these natural paradoxes, the article argues that it is difficult to reconcile why hospitals and providers should be held financially accountable for their patients’ perception of care, when that perception may not equate to “real” care quality.2
However, there is some evidence that patient satisfaction surveys may actually be good proxies for care quality. A large study found that hospitals in the highest quartile of HCAHPS ratings had about 2%-3% higher quality scores for acute MI, CHF, pneumonia, and surgery than those in the lowest quartile. The highest-scoring hospitals also had about 2%-3% lower readmission rates for acute MI, CHF, and pneumonia.3,4 And, as with other quality metrics, there is evidence that the longer a hospital has been administering HCAHPS, the better its scores become. So maybe hospital systems and providers can improve not only patients’ perception of the care they received but also the quality itself, as measured by that perception.
Although there are legitimate arguments on both sides as to whether a patient’s perception of care reflects “real” care quality, these CAHPS surveys are publicly reported and will remain tightly coupled to reimbursement for hospitals and (likely) providers for the foreseeable future. In the meantime, we should continue to focus on patient-centered care, take seriously any voiced concerns, and relentlessly pursue perfection in how patients perceive their care. Because, in the end, we would do it for our families, so we should do it for our patients.
References
- Centers for Medicare & Medicaid Services. Spring 2013 HCAHPS Executive Insight Letter. Available at: www.hcahpsonline.org/Executive_Insight. Accessed Aug. 15, 2013.
- Rosenbaum L. When doctors tell patients what they don’t want to hear. The New Yorker website. Available at: www.newyorker.com/online/blogs/elements/2013/07/when-doctors-tell-patients-what-they-dont-want-to-hear.html. Published July 23, 2013. Accessed Aug. 15, 2013.
- Jha AK, Orav EJ, Zheng J, Epstein AM. Patients' perception of hospital care in the United States. N Engl J Med. 2008;359(18):1921-1931.
- Boulding W, Glickman SW, Manary MP, Schulman KA, Staelin R. Relationship between patient satisfaction with inpatient care and hospital readmission within 30 days. Am J Manag Care. 2011;17(1):41-48.
Dr. Scheurer is a hospitalist and chief quality officer at the Medical University of South Carolina in Charleston. She is physician editor of The Hospitalist. Email her at [email protected].