A Method for Attributing Patient-Level Metrics to Rotating Providers in an Inpatient Setting

Hospitalists’ performance is routinely evaluated by third-party payers, employers, and patients. As hospitalist programs mature, there is a need to develop processes to identify, internally measure, and report on individual and group performance. Society of Hospital Medicine (SHM) data show that a substantial portion of hospitalists’ total compensation is tied to performance, often based at least in part on quality data. In 2006, SHM issued a white paper detailing the key elements of a successful performance monitoring and reporting process.1,2 Its recommendations included the identification of meaningful operational and clinical performance metrics, and the ability to monitor and report both group and individual metrics was highlighted as an essential component. There is evidence that comparison of individual provider performance with that of peers is a necessary element of successful provider dashboards.3 Regular feedback and a clear, visual presentation of the data are also important components of successful provider feedback dashboards.3-6

Much of the literature regarding provider feedback dashboards has been based in the outpatient setting. The majority of these dashboards focus on the management of chronic illnesses (eg, diabetes and hypertension), rates of preventive care services (eg, colonoscopy or mammogram), or avoidance of unnecessary care (eg, antibiotics for sinusitis).4,5 Unlike in the outpatient setting, in which 1 provider often delivers most of the care for a given episode, hospitalized patients are often cared for by multiple providers, challenging the appropriate attribution of patient-level metrics to specific providers. Under the standard approach, an entire hospitalization is attributed to 1 physician, generally the attending of record for the hospitalization, which may be the admitting provider or the discharging provider, depending on the approach used by the hospital. However, assigning responsibility for an entire hospitalization to a provider who may have seen the patient for only a small fraction of that hospitalization may jeopardize the validity of metrics. As provider metrics are increasingly used for compensation, it is important to ensure that the method for attribution correctly identifies the providers caring for patients. To our knowledge, there is no gold standard approach for attributing metrics to providers when patients are cared for by multiple providers, and the standard attending of record–based approach may lack face validity in many cases.

We aimed to develop and operationalize a system to more fairly attribute patient-level data to individual providers across a single hospitalization even when multiple providers cared for the patient. We then compared our methodology to the standard approach, in which the attending of record receives full attribution for each metric, to determine the difference on a provider level between the 2 models.

METHODS

Clinical Setting

The Johns Hopkins Hospital is a 1145-bed, tertiary-care hospital. Over the years of this project, the Johns Hopkins Hospitalist Program was an approximately 20-physician group providing care in a variety of settings, including a dedicated hospitalist floor, where this metrics program was initiated. Hospitalists in this setting work Monday through Friday, with 1 hospitalist and a moonlighter covering on the weekends. Admissions are performed by an admitter, and overnight care is provided by a nocturnist. Initially 17 beds, this unit expanded to 24 beds in June 2012. For the purposes of this article, we included all general medicine patients admitted to this floor between July 1, 2010, and June 30, 2014, who were cared for by hospitalists. During this period, all patients were inpatients; no patients were admitted under observation status. All of these patients were cared for by hospitalists without housestaff or advanced practitioners. Since 2014, the metrics program has been expanded to other hospitalist-run services in the hospital, but for simplicity, we have not presented these more recent data.

Individual Provider Metrics

Metrics were chosen to reflect institutional quality and efficiency priorities. Our choice of metrics was restricted to those that (1) plausibly reflect provider performance, at least in part, and (2) could be accessed in electronic form (without any manual chart review). Whenever possible, we chose metrics with objective data. Additionally, because funding for this effort was provided by the hospital, we sought to ensure that enough of the metrics were related to cost to justify ongoing hospital support of the project. SAS 9.2 (SAS Institute Inc, Cary, NC) was used to calculate metric weights. Specific metrics included American College of Chest Physicians (ACCP)–compliant venous thromboembolism (VTE) prophylaxis,7 observed-to-expected length of stay (LOS) ratio, percentage of discharges per day, discharges before 3 pm, depth of coding, patient satisfaction, readmissions, communication with the primary care provider, and time to signature for discharge summaries (Table 1).

 

 

Appropriate prophylaxis for VTE was assessed by using an algorithm embedded within the computerized provider order entry system, which evaluated whether ACCP-compliant VTE prophylaxis was prescribed within 24 hours following admission. This included a risk assessment, and credit was given for no prophylaxis, mechanical prophylaxis, and/or pharmacologic prophylaxis when consistent with the ACCP guidelines.7

Observed-to-expected LOS was defined by using the University HealthSystem Consortium (UHC; now Vizient Inc) expected LOS for the given calendar year. This approach incorporates patient diagnoses, demographics, and other administrative variables to define an expected LOS for each patient.

The percent of patients discharged per day was defined from billing data as the percentage of a provider’s evaluation and management charges that were the final charge of a patient’s stay (regardless of whether a discharge day service was coded).

Discharge prior to 3 pm was determined from administrative data by using the time at which the patient’s discharge was recorded in the electronic medical record system.

Depth of coding was defined as the number of coded diagnoses submitted to the Maryland Health Services Cost Review Commission for determining payment and was viewed as an indicator of the thoroughness of provider documentation.

Patient satisfaction was defined at the patient level (for those patients who turned in patient satisfaction surveys) as the pooled value of the 5 provider questions on the hospital’s patient satisfaction survey administered by Press Ganey: “time the physician spent with you,” “did the physician show concern for your questions/worries,” “did the physician keep you informed,” “friendliness/courtesy of the physician,” and “skill of the physician.”8

Readmission rates were defined as same-hospital readmissions divided by the total number of patients discharged by a given provider, with exclusions based on the Centers for Medicare and Medicaid Services hospital-wide, all-cause readmission measure.1 The expected same-hospital readmission rate was defined for each patient as the observed readmission rate in the entire UHC (Vizient) data set for all patients with the same All Patient Refined Diagnosis Related Group and severity of illness, as we have described previously.9

Communication with the primary care provider was the only self-reported metric used. It was based on a mandatory prompt on the discharge worksheet in the electronic medical record (EMR). Successful communication with the outpatient provider was defined as verbal or electronic communication by the hospitalist with the outpatient provider. Partial (50%) credit was given for providers who attempted but were unsuccessful in communicating with the outpatient provider, for patients for whom the provider had access to the Johns Hopkins EMR system, and for planned admissions without new or important information to convey. No credit was given for providers who indicated that communication was not indicated, who indicated that a patient and/or family would update the provider, or who indicated that the discharge summary would be sufficient.9 Because the discharge worksheet could be initiated at any time during the hospitalization, providers could document communication with the outpatient provider at any point during hospitalization.

Discharge summary turnaround was defined as the average number of days elapsed between the day of discharge and the signing of the discharge summary in the EMR.

Assigning Ownership of Patients to Individual Providers

Using billing data, we assigned ownership of patient care based on the type, timing, and number of charges that occurred during each hospitalization (Figure 1). Eligible charges included all history and physical (codes 99221, 99222, and 99223), subsequent care (codes 99231, 99232, and 99233), and discharge charges (codes 99238 and 99239).

Professional fee charges, linked by a unique identifier assigned to each hospitalization, were used to identify which provider saw the patient on the admission day, the discharge day, and each subsequent care day. Because billing data also determined providers’ productivity, bonus supplements, and policy compliance, providers had a strong incentive to submit charges promptly.

The provider who billed the admission history and physical (codes 99221, 99222, and 99223) within 1 calendar date of the patient’s initial admission was defined as the admitting provider. Patients transferred to the hospitalist service from other services were not assigned an admitting hospitalist. The sole metric assigned to the admitting hospitalist was ACCP-compliant VTE prophylaxis.

The provider who billed the final subsequent care or discharge code (codes 99231, 99232, 99233, 99238, and 99239) within 1 calendar date of discharge was defined as the discharging provider. For hospitalizations characterized by a single provider charge (eg, for patients admitted and discharged on the same day), the provider billing this charge was assigned as both the admitting and discharging physician. Patients upgraded to the intensive care unit (ICU) were not counted as a discharge unless the patient was downgraded and discharged from the hospitalist service. The discharging provider was assigned responsibility for the time of discharge, the percent of patients discharged per day, the discharge summary turnaround time, and hospital readmissions.
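As a simplified illustration, the admitting- and discharging-provider rules above can be sketched in Python. The `Charge` record and its field names are hypothetical, and the 1-calendar-date billing windows and service-transfer exclusions described in the text are omitted:

```python
from collections import namedtuple

# Hypothetical charge record: one row per submitted professional fee.
Charge = namedtuple("Charge", ["hospitalization_id", "provider", "day", "cpt_code"])

HP_CODES = {"99221", "99222", "99223"}          # admission history and physical
SUBSEQUENT_CODES = {"99231", "99232", "99233"}  # subsequent care
DISCHARGE_CODES = {"99238", "99239"}            # discharge day services

def assign_admitting_discharging(charges):
    """Return (admitting provider, discharging provider) for one hospitalization.

    The provider billing the H&P is the admitting provider; the provider
    billing the final subsequent-care or discharge code is the discharging
    provider. A single-charge hospitalization yields the same provider for both.
    """
    charges = sorted(charges, key=lambda c: c.day)
    admitting = next((c.provider for c in charges if c.cpt_code in HP_CODES), None)
    eligible = [c for c in charges if c.cpt_code in SUBSEQUENT_CODES | DISCHARGE_CODES]
    discharging = eligible[-1].provider if eligible else admitting
    return admitting, discharging
```

For example, if provider A bills the H&P on day 0 and a subsequent care code on day 1, and provider B bills the discharge code on day 2, the function returns A as admitting and B as discharging.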

Metrics that were assigned to multiple providers for a single hospitalization were termed “provider day–weighted” metrics. The formula for calculating the weight for each provider day–weighted metric was as follows: weight for provider A = [number of daily charges billed by provider A] divided by [LOS +1]. The initial hospital day was counted as day 0. LOS plus 1 was used to recognize that a typical hospitalization will have a charge on the day of admission (day 0) and a charge on the day of discharge such that an LOS of 2 days (eg, a patient admitted on Monday and discharged on Wednesday) will have 3 daily charges. Provider day–weighted metrics included patient satisfaction, communication with the outpatient provider, depth of coding, and observed-to-expected LOS.
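The day-weighting formula can be expressed directly; the following is a minimal sketch, with an illustrative function name and inputs:

```python
def day_weights(daily_charges_by_provider, los):
    """Day-weighted attribution: weight for a provider equals the number of
    daily charges that provider billed divided by (LOS + 1)."""
    return {provider: n_charges / (los + 1)
            for provider, n_charges in daily_charges_by_provider.items()}

# The example from the text: a patient admitted Monday (day 0) and discharged
# Wednesday has an LOS of 2 days and 3 daily charges. Suppose provider A bills
# days 0 and 1, and provider B bills the discharge day.
weights = day_weights({"A": 2, "B": 1}, los=2)  # A: 2/3, B: 1/3
```

When every calendar day of the stay carries exactly 1 daily charge, the weights across providers sum to 1 for that hospitalization.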

Our billing software prevented providers from the same group from billing multiple daily charges, thus ensuring that there were no duplicated charges submitted for a given day.

 

 

Presenting Results

Providers were only shown data from the day-weighted approach. For ease of visual interpretation, scores for each metric were scaled ordinally from 1 (worst performance) to 9 (best performance; Table 1). Data were displayed in a dashboard format on a password-protected website for each provider to view his or her own data relative to that of the hospitalist peer group. The dashboard was implemented in this format on July 1, 2011. Data were updated quarterly (Figure 2).

Results were displayed in a polyhedral or spider-web graph (Figure 2). Provider and group metrics were scaled according to predefined benchmarks established for each metric and standardized to a scale ranging from 1 to 9. The scale for each metric was set based on examining historical data and group median performance on the metrics to ensure that there was a range of performance (ie, to avoid having most hospitalists scoring a 1 or 9). Scaling thresholds were periodically adjusted as appropriate to maintain good visual discrimination. Higher scores (creating a larger-volume polygon) are desirable even for metrics such as LOS, for which a low value is desirable. Both a spider-web graph and trends over time were available to the provider (Figure 2). These graphs display a comparison of the individual provider scores for each metric to the hospitalist group average for that metric.
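The benchmark-based scaling can be sketched as follows; the cut points shown are hypothetical, not the thresholds actually used:

```python
import bisect

def scale_to_ordinal(value, cut_points, higher_is_better=True):
    """Map a raw metric value onto the 1 (worst) to 9 (best) display scale.

    `cut_points` is an ascending list of 8 thresholds derived from historical
    group performance; the resulting score is inverted for metrics in which
    a lower raw value is desirable (eg, observed-to-expected LOS).
    """
    score = bisect.bisect_right(cut_points, value) + 1  # yields 1..9
    return score if higher_is_better else 10 - score

# Hypothetical cut points for observed-to-expected LOS (lower is better).
los_cuts = [0.7, 0.8, 0.9, 1.0, 1.1, 1.2, 1.3, 1.4]
```

With these cut points, an observed-to-expected LOS of 0.65 would display as a 9 and a ratio of 1.5 as a 1, so larger polygons on the spider-web graph remain desirable for every metric.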

Comparison with the Standard (Attending of Record) Method of Attribution

For the purposes of this report, we sought to determine whether there were meaningful differences between our day-weighted approach and the standard method of attribution, in which the attending of record is assigned responsibility for each metric; we made this comparison only for metrics that would not have been attributed to the discharging attending under both methods. Our goal was to determine where and whether there was a meaningful difference between the 2 methodologies, recognizing that the degree of difference might vary in other institutions and settings. In our hospital, the attending of record is generally the discharging attending. To compare the 2 methodologies, we arbitrarily picked 2015 to retrospectively evaluate the differences between the 2 methods of attribution. We did not display or provide data using the standard methodology to providers at any point; this approach was used only for the purposes of this report. Because these metrics are intended to evaluate relative provider performance, we assigned each provider a percentile for his or her performance on a given metric using our attribution methodology and then, similarly, assigned a percentile to each provider using the standard methodology. This yielded 2 percentile scores for each provider and each metric. We then compared these percentile ranks in 2 ways: (1) we determined how often providers who scored in the top half of the group for a given metric (above the 50th percentile) under one calculation method also scored in the top half under the other method, and (2) we calculated the absolute value of the difference in percentiles between the 2 methods to characterize the impact that switching methods might have on a provider’s ranking for that metric. For instance, if a provider scored at the 20th percentile for the group in patient satisfaction with 1 attribution method and at the 40th percentile with the other, the absolute change would be 20 percentile points, but the provider would still be below the 50th percentile by both methods (concordant bottom-half performance). We did not perform this comparison for metrics assigned to the discharging provider (such as discharge summary turnaround time or readmissions) because the attending of record designation is assigned to the discharging provider at our hospital.
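The 2 comparisons can be sketched as follows. This is an illustrative Python sketch using our own function names; ties in scores are handled naively:

```python
def percentile_ranks(scores):
    """Percentile rank (0-100) of each provider's score within the group."""
    ordered = sorted(scores.values())
    n = len(scores)
    return {p: 100.0 * ordered.index(s) / (n - 1) for p, s in scores.items()}

def compare_methods(day_weighted, standard):
    """Return (fraction of providers with concordant top-half/bottom-half
    status, largest absolute percentile shift between the 2 methods)."""
    pw = percentile_ranks(day_weighted)
    ps = percentile_ranks(standard)
    concordant = sum((pw[p] > 50) == (ps[p] > 50) for p in pw)
    max_shift = max(abs(pw[p] - ps[p]) for p in pw)
    return concordant / len(pw), max_shift
```

As a worked example, if 5 providers’ scores are exactly reversed between the 2 methods, only the middle provider keeps the same top-half/bottom-half status (concordance 0.2) and the extreme providers shift by 100 percentile points.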

RESULTS

The dashboard was successfully operationalized on July 1, 2011, with displays visible to providers as shown in Figure 2. Consistent with the principles of providing effective performance feedback to providers, the display simultaneously showed providers their individual performance as well as the performance of their peers. Providers were able to view their spider-web plot for prior quarters. Not shown are additional views that allowed providers to see quarterly trends in their data versus their peers across several fiscal years. Also available to providers was their ranking relative to their peers for each metric; specific peers were deidentified in the display.

There was notable discordance between provider rankings under the 2 methodologies, as shown in Table 2. Provider performance above or below the median was concordant 56% to 75% of the time (depending on the particular metric), indicating substantial discordance because top-half or bottom-half concordance would be expected to occur by chance 50% of the time. Although the percentile differences between the 2 methods tended to be modest for most providers (the median difference was 13 to 22 percentile points across the metrics), for some providers the method of calculation dramatically impacted their rankings. For 5 of the 6 metrics we examined, at least 1 provider had a 50-percentile or greater change in his or her ranking based on the method used. This indicates that at least some providers would have had markedly different scores relative to their peers had we used the alternative methodology (Table 2). In VTE prophylaxis, for example, at least 1 provider had a 94-percentile change in his or her ranking; similarly, a provider had an 88-percentile change in his or her LOS ranking between the 2 methodologies.

 

 

DISCUSSION

We found that it is possible to assign metrics across 1 hospital stay to multiple providers by using billing data. We also found a meaningful discrepancy in how well providers scored relative to their peers depending on the method used for attribution.

As hospitalist programs and providers in general are increasingly being asked to develop dashboards to monitor individual and group performance, correctly attributing care to providers is likely to become increasingly important. Experts agree that principles of effective provider performance dashboards include ranking individual provider performance relative to peers, clearly displaying data in an easily accessible format, and ensuring that data can be credibly attributed to the individual provider.3,4,6 However, there appears to be no gold standard method for attribution, especially in the inpatient setting. Our results imply that hospitals should consider attributing performance metrics based on ascribed ownership from billing data and not just from attending of record status.

Several limitations of our findings are important to consider. First, our program is a relatively small, academic group with handoffs that typically occur every 1 to 2 weeks, sometimes with additional handoffs on weekends. Different care patterns and settings might impact the utility of our attribution methodology relative to the standard methodology. Additionally, the relative merits of the different methodologies cannot be ascertained from our comparison: we can demonstrate discordance between the attribution methodologies, but we cannot say that 1 method is correct and the other is flawed. Although group input and feedback suggest that providers perceive our day-weighted approach as fairer, we did not conduct a formal survey to examine providers’ preferences for the standard versus day-weighted approaches. The appropriateness of a particular attribution method needs to be assessed locally and may vary based on the clinical setting. For instance, on a service in which patients are admitted for procedures, it may make more sense to attribute the outcome of the case to the proceduralist even if that provider did not bill for the patient’s care on a daily basis. Finally, the computational requirements of our methodology are not trivial and require linking billing data with administrative patient-level data, which may be challenging to operationalize in some institutions.

These limitations aside, we believe that our attribution methodology has face validity. For example, a provider might be justifiably frustrated if, using the standard methodology, he or she is charged with the LOS of a patient who had been hospitalized for months, particularly if that patient is discharged shortly after the provider assumes care. Our method addresses this type of misattribution. Particularly when individual provider compensation is based on performance on metrics (as is the case at our institution), optimizing provider attribution to particular patients may be important, and face validity may be required for group buy-in.

In summary, we have demonstrated that it is possible to use billing data to assign ownership of patients to multiple providers over 1 hospital stay. This could be applied to other hospitalist programs as well as other healthcare settings in which multiple providers care for patients during 1 healthcare encounter (eg, ICUs).

Disclosure

The authors declare they have no relevant conflicts of interest.

References

1. Horwitz L, Partovian C, Lin Z, et al. Hospital-Wide (All-Condition) 30‐Day Risk-Standardized Readmission Measure. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/downloads/MMSHospital-WideAll-ConditionReadmissionRate.pdf. Accessed March 6, 2015.
2. Society of Hospital Medicine. Measuring Hospitalist Performance: Metrics, Reports, and Dashboards. 2007; https://www.hospitalmedicine.org/Web/Practice_Management/Products_and_Programs/measure_hosp_perf_metrics_reports_dashboards.aspx. Accessed May 12, 2013.
3. Teleki SS, Shaw R, Damberg CL, McGlynn EA. Providing performance feedback to individual physicians: current practice and emerging lessons. Santa Monica, CA: RAND Corporation; 2006. 1-47. https://www.rand.org/content/dam/rand/pubs/working_papers/2006/RAND_WR381.pdf. Accessed August 2017.
4. Brehaut JC, Colquhoun HL, Eva KW, et al. Practice Feedback Interventions: 15 Suggestions for Optimizing Effectiveness Practice Feedback Interventions. Ann Intern Med. 2016;164(6):435-441. PubMed
5. Dowding D, Randell R, Gardner P, et al. Dashboards for improving patient care: review of the literature. Int J Med Inform. 2015;84(2):87-100. PubMed
6. Landon BE, Normand S-LT, Blumenthal D, Daley J. Physician clinical performance assessment: prospects and barriers. JAMA. 2003;290(9):1183-1189. PubMed
7. Guyatt GH, Akl EA, Crowther M, Gutterman DD, Schünemann HJ. Executive summary: Antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians evidence-based clinical practice guidelines. Chest. 2012;141(2 suppl):7S-47S. PubMed
8. Siddiqui Z, Qayyum R, Bertram A, et al. Does Provider Self-reporting of Etiquette Behaviors Improve Patient Experience? A Randomized Controlled Trial. J Hosp Med. 2017;12(6):402-406. PubMed
9. Oduyebo I, Lehmann CU, Pollack CE, et al. Association of self-reported hospital discharge handoffs with 30-day readmissions. JAMA Intern Med. 2013;173(8):624-629. PubMed

Journal of Hospital Medicine. 2018;13(7):470-475. Published online first December 20, 2017.


Our billing software prevented providers from the same group from billing multiple daily charges, thus ensuring that there were no duplicated charges submitted for a given day.

 

 

Presenting Results

Providers were only shown data from the day-weighted approach. For ease of visual interpretation, scores for each metric were scaled ordinally from 1 (worst performance) to 9 (best performance; Table 1). Data were displayed in a dashboard format on a password-protected website for each provider to view his or her own data relative to that of the hospitalist peer group. The dashboard was implemented in this format on July 1, 2011. Data were updated quarterly (Figure 2).

Results were displayed in a polyhedral or spider-web graph (Figure 2). Provider and group metrics were scaled according to predefined benchmarks established for each metric and standardized to a scale ranging from 1 to 9. The scale for each metric was set based on examining historical data and group median performance on the metrics to ensure that there was a range of performance (ie, to avoid having most hospitalists scoring a 1 or 9). Scaling thresholds were periodically adjusted as appropriate to maintain good visual discrimination. Higher scores (creating a larger-volume polygon) are desirable even for metrics such as LOS, for which a low value is desirable. Both a spider-web graph and trends over time were available to the provider (Figure 2). These graphs display a comparison of the individual provider scores for each metric to the hospitalist group average for that metric.

Comparison with the Standard (Attending of Record) Method of Attribution

For the purposes of this report, we sought to determine whether there were meaningful differences between our day-weighted approach versus the standard method of attribution, in which the attending of record is assigned responsibility for each metric that would not have been attributed to the discharging attending under both methods. Our goal was to determine where and whether there was a meaningful difference between the 2 methodologies, recognizing that the degree of difference between these 2 methodologies might vary in other institutions and settings. In our hospital, the attending of record is generally the discharging attending. In order to compare the 2 methodologies, we arbitrarily picked 2015 to retrospectively evaluate the differences between these 2 methods of attribution. We did not display or provide data using the standard methodology to providers at any point; this approach was used only for the purposes of this report. Because these metrics are intended to evaluate relative provider performance, we assigned a percentile to each provider for his or her performance on the given metric using our attribution methodology and then, similarly, assigned a percentile to each provider using the standard methodology. This yielded 2 percentile scores for each provider and each metric. We then compared these percentile ranks for providers in 2 ways: (1) we determined how often providers who scored in the top half of the group for a given metric (above the 50th percentile) also scored in the top half of the group for that metric by using the other calculation method, and (2) we calculated the absolute value of the difference in percentiles between the 2 methods to characterize the impact on a provider’s ranking for that metric that might result from switching to the other method. 
For instance, if a provider scored at the 20th percentile for the group in patient satisfaction with 1 attribution method and scored at the 40th percentile for the group in patient satisfaction using the other method, the absolute change in percentile would be 20 percentile points. But, this provider would still be below the 50th percentile by both methods (concordant bottom half performance). We did not perform this comparison for metrics assigned to the discharging provider (such as discharge summary turnaround time or readmissions) because the attending of record designation is assigned to the discharging provider at our hospital.

RESULTS

The dashboard was successfully operationalized on July 1, 2011, with displays visible to providers as shown in Figure 2. Consistent with the principles of providing effective performance feedback to providers, the display simultaneously showed providers their individual performance as well as the performance of their peers. Providers were able to view their spider-web plot for prior quarters. Not shown are additional views that allowed providers to see quarterly trends in their data versus their peers across several fiscal years. Also available to providers was their ranking relative to their peers for each metric; specific peers were deidentified in the display.

There was notable discordance between provider rankings between the 2 methodologies, as shown in Table 2. Provider performance above or below the median was concordant 56% to 75% of the time (depending on the particular metric), indicating substantial discordance because top-half or bottom-half concordance would be expected to occur by chance 50% of the time. Although the provider percentile differences between the 2 methods tended to be modest for most providers (the median difference between the methods was 13 to 22 percentile points for the various metrics), there were some providers for whom the method of calculation dramatically impacted their rankings. For 5 of the 6 metrics we examined, at least 1 provider had a 50-percentile or greater change in his or her ranking based on the method used. This indicates that at least some providers would have had markedly different scores relative to their peers had we used the alternative methodology (Table 2). In VTE prophylaxis, for example, at least 1 provider had a 94-percentile change in his or her ranking; similarly, a provider had an 88-perentile change in his or her LOS ranking between the 2 methodologies.

 

 

DISCUSSION

We found that it is possible to assign metrics across 1 hospital stay to multiple providers by using billing data. We also found a meaningful discrepancy in how well providers scored (relative to their peers) based on the method used for attribution. These results imply that hospitals should consider attributing performance metrics based on ascribed ownership from billing data and not just from attending of record status.

As hospitalist programs and providers in general are increasingly being asked to develop dashboards to monitor individual and group performance, correctly attributing care to providers is likely to become increasingly important. Experts agree that principles of effective provider performance dashboards include ranking individual provider performance relative to peers, clearly displaying data in an easily accessible format, and ensuring that data can be credibly attributed to the individual provider.3,4,6 However, there appears to be no gold standard method for attribution, especially in the inpatient setting. Our results imply that hospitals should consider attributing performance metrics based on ascribed ownership from billing data and not just from attending of record status.

Several limitations of our findings are important to consider. First, our program is a relatively small, academic group with handoffs that typically occur every 1 to 2 weeks and sometimes with additional handoffs on weekends. Different care patterns and settings might impact the utility of our attribution methodology relative to the standard methodology. Additionally, it is important to note that the relative merits of the different methodologies cannot be ascertained from our comparison. We can demonstrate discordance between the attribution methodologies, but we cannot say that 1 method is correct and the other is flawed. Although we believe that our day-weighted approach feels fairer to providers based on group input and feedback, we did not conduct a formal survey to examine providers’ preferences for the standard versus day-weighted approaches. The appropriateness of a particular attribution method needs to be assessed locally and may vary based on the clinical setting. For instance, on a service in which patients are admitted for procedures, it may make more sense to attribute the outcome of the case to the proceduralist even if that provider did not bill for the patient’s care on a daily basis. Finally, the computational requirements of our methodology are not trivial and require linking billing data with administrative patient-level data, which may be challenging to operationalize in some institutions.

These limitations aside, we believe that our attribution methodology has face validity. For example, a provider might be justifiably frustrated if, using the standard methodology, he or she is charged with the LOS of a patient who had been hospitalized for months, particularly if that patient is discharged shortly after the provider assumes care. Our method addresses this type of misattribution. Particularly when individual provider compensation is based on performance on metrics (as is the case at our institution), optimizing provider attribution to particular patients may be important, and face validity may be required for group buy-in.

In summary, we have demonstrated that it is possible to use billing data to assign ownership of patients to multiple providers over 1 hospital stay. This could be applied to other hospitalist programs as well as other healthcare settings in which multiple providers care for patients during 1 healthcare encounter (eg, ICUs).

Disclosure

The authors declare they have no relevant conflicts of interest.


We aimed to develop and operationalize a system to more fairly attribute patient-level data to individual providers across a single hospitalization even when multiple providers cared for the patient. We then compared our methodology to the standard approach, in which the attending of record receives full attribution for each metric, to determine the difference on a provider level between the 2 models.

METHODS

Clinical Setting

The Johns Hopkins Hospital is a 1145-bed, tertiary-care hospital. Over the years of this project, the Johns Hopkins Hospitalist Program was an approximately 20-physician group providing care in a variety of settings, including a dedicated hospitalist floor, where this metrics program was initiated. Hospitalists in this setting work Monday through Friday, with 1 hospitalist and a moonlighter covering on the weekends. Admissions are performed by an admitter, and overnight care is provided by a nocturnist. Initially 17 beds, this unit expanded to 24 beds in June 2012. For the purposes of this article, we included all general medicine patients admitted to this floor between July 1, 2010, and June 30, 2014, who were cared for by hospitalists. During this period, all patients were inpatients; no patients were admitted under observation status. All of these patients were cared for by hospitalists without housestaff or advanced practitioners. Since 2014, the metrics program has been expanded to other hospitalist-run services in the hospital, but for simplicity, we have not presented these more recent data.

Individual Provider Metrics

Metrics were chosen to reflect institutional quality and efficiency priorities. Our choice of metrics was restricted to those that (1) plausibly reflect provider performance, at least in part, and (2) could be accessed in electronic form (without any manual chart review). Whenever possible, we chose metrics with objective data. Additionally, because funding for this effort was provided by the hospital, we sought to ensure that enough of the metrics were related to cost to justify ongoing hospital support of the project. SAS 9.2 (SAS Institute Inc, Cary, NC) was used to calculate metric weights. Specific metrics included American College of Chest Physicians (ACCP)–compliant venous thromboembolism (VTE) prophylaxis,7 observed-to-expected length of stay (LOS) ratio, percentage of discharges per day, discharges before 3 pm, depth of coding, patient satisfaction, readmissions, communication with the primary care provider, and time to signature for discharge summaries (Table 1).

Appropriate prophylaxis for VTE was calculated by using an algorithm embedded within the computerized provider order entry system, which assessed the prescription of ACCP-compliant VTE prophylaxis within 24 hours following admission. The algorithm incorporated a risk assessment, and credit was given when the ordered regimen (no prophylaxis, mechanical prophylaxis, pharmacologic prophylaxis, or a combination) was compliant with the ACCP guidelines.7

Observed-to-expected LOS was defined by using the University HealthSystem Consortium (UHC; now Vizient Inc) expected LOS for the given calendar year. This approach incorporates patient diagnoses, demographics, and other administrative variables to define an expected LOS for each patient.

The percent of patients discharged per day was defined from billing data as the percentage of a provider’s evaluation and management charges that were the final charge of a patient’s stay (regardless of whether a discharge day service was coded).

Discharge prior to 3 pm was defined from administrative data, based on the time the patient’s discharge was recorded in the electronic medical system.

Depth of coding was defined as the number of coded diagnoses submitted to the Maryland Health Services Cost Review Commission for determining payment and was viewed as an indicator of the thoroughness of provider documentation.

Patient satisfaction was defined at the patient level (for those patients who turned in patient satisfaction surveys) as the pooled value of the 5 provider questions on the hospital’s patient satisfaction survey administered by Press Ganey: “time the physician spent with you,” “did the physician show concern for your questions/worries,” “did the physician keep you informed,” “friendliness/courtesy of the physician,” and “skill of the physician.”8

Readmission rates were defined as same-hospital readmissions divided by the total number of patients discharged by a given provider, with exclusions based on the Centers for Medicare and Medicaid Services hospital-wide, all-cause readmission measure.1 The expected same-hospital readmission rate was defined for each patient as the observed readmission rate in the entire UHC (Vizient) data set for all patients with the same All Patient Refined Diagnosis Related Group and severity of illness, as we have described previously.9
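The observed-to-expected readmission calculation described above can be sketched as follows. The record shape (`provider`, `readmitted`, `expected_rate` fields) is hypothetical; in practice, each patient's expected rate would come from the benchmark rate for that patient's All Patient Refined Diagnosis Related Group and severity-of-illness stratum.

```python
from collections import defaultdict

def oe_readmission_ratio(discharges):
    """discharges: list of dicts with hypothetical keys 'provider',
    'readmitted' (bool), and 'expected_rate' (the benchmark readmission
    rate for the patient's APR-DRG / severity-of-illness stratum)."""
    observed = defaultdict(float)
    expected = defaultdict(float)
    for d in discharges:
        p = d["provider"]
        observed[p] += 1.0 if d["readmitted"] else 0.0
        expected[p] += d["expected_rate"]
    # Observed-to-expected ratio per provider; None if no expected events.
    return {p: (observed[p] / expected[p] if expected[p] > 0 else None)
            for p in expected}
```

A ratio above 1 indicates more same-hospital readmissions than expected for that provider's case mix.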

Communication with the primary care provider was the only self-reported metric used. It was based on a mandatory prompt on the discharge worksheet in the electronic medical record (EMR). Successful communication with the outpatient provider was defined as verbal or electronic communication by the hospitalist with the outpatient provider. Partial (50%) credit was given for providers who attempted but were unsuccessful in communicating with the outpatient provider, for patients for whom the provider had access to the Johns Hopkins EMR system, and for planned admissions without new or important information to convey. No credit was given for providers who indicated that communication was not indicated, who indicated that a patient and/or family would update the provider, or who indicated that the discharge summary would be sufficient.9 Because the discharge worksheet could be initiated at any time during the hospitalization, providers could document communication with the outpatient provider at any point during hospitalization.

Discharge summary turnaround was defined as the average number of days elapsed between the day of discharge and the signing of the discharge summary in the EMR.

Assigning Ownership of Patients to Individual Providers

Using billing data, we assigned ownership of patient care based on the type, timing, and number of charges that occurred during each hospitalization (Figure 1). Eligible charges included all history and physical (codes 99221, 99222, and 99223), subsequent care (codes 99231, 99232, and 99233), and discharge charges (codes 99238 and 99239).

Using a unique identifier assigned to each hospitalization, professional fees submitted by providers were used to identify which provider saw the patient on the admission day, on the discharge day, and on each subsequent care day. Because billing data were also used to determine providers’ productivity, bonus supplements, and policy compliance, providers had a strong incentive to submit charges promptly.

The provider who billed the admission history and physical (codes 99221, 99222, and 99223) within 1 calendar date of the patient’s initial admission was defined as the admitting provider. Patients transferred to the hospitalist service from other services were not assigned an admitting hospitalist. The sole metric assigned to the admitting hospitalist was ACCP-compliant VTE prophylaxis.

The provider who billed the final subsequent care or discharge code (codes 99231, 99232, 99233, 99238, and 99239) within 1 calendar date of discharge was defined as the discharging provider. For hospitalizations characterized by a single provider charge (eg, for patients admitted and discharged on the same day), the provider billing this charge was assigned as both the admitting and discharging physician. Patients upgraded to the intensive care unit (ICU) were not counted as a discharge unless the patient was downgraded and discharged from the hospitalist service. The discharging provider was assigned responsibility for the time of discharge, the percent of patients discharged per day, the discharge summary turnaround time, and hospital readmissions.
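The admitting- and discharging-provider rules above can be sketched in code. The charge-record shape (provider, CPT code, service date) is a hypothetical simplification for illustration; real billing extracts would need additional cleanup, and the ICU-upgrade exclusion is omitted here.

```python
from datetime import date

HP_CODES = {"99221", "99222", "99223"}          # history and physical
CARE_CODES = {"99231", "99232", "99233",        # subsequent care
              "99238", "99239"}                 # discharge day services

def admitting_and_discharging(charges, admit_date, discharge_date):
    """charges: list of (provider, cpt_code, service_date) tuples for ONE
    hospitalization -- a hypothetical shape for illustration."""
    # A single-charge stay (admitted and discharged the same day) assigns
    # that provider as both admitting and discharging physician.
    if len(charges) == 1:
        return charges[0][0], charges[0][0]
    # Admitting provider: billed the H&P within 1 calendar day of admission.
    admitting = None
    for provider, code, day in charges:
        if code in HP_CODES and (day - admit_date).days <= 1:
            admitting = provider
    # Discharging provider: billed the final subsequent care or discharge
    # code within 1 calendar day of discharge.
    final = [c for c in charges
             if c[1] in CARE_CODES and (discharge_date - c[2]).days <= 1]
    discharging = max(final, key=lambda c: c[2])[0] if final else None
    return admitting, discharging
```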

Metrics that were assigned to multiple providers for a single hospitalization were termed “provider day–weighted” metrics. The formula for calculating the weight for each provider day–weighted metric was as follows: weight for provider A = [number of daily charges billed by provider A] divided by [LOS +1]. The initial hospital day was counted as day 0. LOS plus 1 was used to recognize that a typical hospitalization will have a charge on the day of admission (day 0) and a charge on the day of discharge such that an LOS of 2 days (eg, a patient admitted on Monday and discharged on Wednesday) will have 3 daily charges. Provider day–weighted metrics included patient satisfaction, communication with the outpatient provider, depth of coding, and observed-to-expected LOS.
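The day-weighting formula above reduces to a few lines, assuming daily charge counts per provider have already been tallied from billing data:

```python
def day_weights(daily_charge_counts, los):
    """daily_charge_counts: {provider: number of daily charges billed for
    this hospitalization}; los: length of stay in days (admission = day 0).
    Weight for provider A = charges billed by A / (LOS + 1)."""
    return {p: n / (los + 1) for p, n in daily_charge_counts.items()}
```

For the Monday-to-Wednesday example in the text (LOS of 2, hence 3 daily charges), a provider who billed Monday and Tuesday receives weight 2/3 and the provider who billed Wednesday receives 1/3.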

Our billing software prevented providers from the same group from billing multiple daily charges, thus ensuring that there were no duplicated charges submitted for a given day.

Presenting Results

Providers were only shown data from the day-weighted approach. For ease of visual interpretation, scores for each metric were scaled ordinally from 1 (worst performance) to 9 (best performance; Table 1). Data were displayed in a dashboard format on a password-protected website for each provider to view his or her own data relative to that of the hospitalist peer group. The dashboard was implemented in this format on July 1, 2011. Data were updated quarterly (Figure 2).

Results were displayed in a polyhedral or spider-web graph (Figure 2). Provider and group metrics were scaled according to predefined benchmarks established for each metric and standardized to a scale ranging from 1 to 9. The scale for each metric was set based on examining historical data and group median performance on the metrics to ensure that there was a range of performance (ie, to avoid having most hospitalists scoring a 1 or 9). Scaling thresholds were periodically adjusted as appropriate to maintain good visual discrimination. Higher scores (creating a larger-volume polygon) are desirable even for metrics such as LOS, for which a low value is desirable. Both a spider-web graph and trends over time were available to the provider (Figure 2). These graphs display a comparison of the individual provider scores for each metric to the hospitalist group average for that metric.
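One way to implement the 1-to-9 ordinal scaling is with precomputed cut points per metric. The threshold values here are hypothetical stand-ins for the benchmarks set from historical group performance, and the inversion for lower-is-better metrics such as observed-to-expected LOS mirrors the display convention described above.

```python
import bisect

def scale_1_to_9(value, thresholds, higher_is_better=True):
    """Map a raw metric value onto the 1 (worst) .. 9 (best) display scale.
    thresholds: 8 ascending cut points dividing raw values into 9 bins
    (hypothetical; set from historical group performance). For metrics
    where a LOW raw value is good (e.g., observed-to-expected LOS), pass
    higher_is_better=False so a larger score still means better."""
    score = bisect.bisect_right(thresholds, value) + 1   # bin index 1..9
    return score if higher_is_better else 10 - score
```

Periodically re-tuning the cut points, as the text describes, amounts to replacing the `thresholds` list so that group performance stays spread across the 1-to-9 range.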

Comparison with the Standard (Attending of Record) Method of Attribution

For the purposes of this report, we sought to determine whether there were meaningful differences between our day-weighted approach and the standard method of attribution, in which the attending of record is assigned responsibility for each metric. We restricted this comparison to metrics that would not have been attributed to the discharging attending under both methods. Our goal was to determine whether, and to what extent, the 2 methodologies differed, recognizing that the degree of difference might vary in other institutions and settings. In our hospital, the attending of record is generally the discharging attending. To compare the 2 methodologies, we arbitrarily picked 2015 to retrospectively evaluate the differences between these 2 methods of attribution. We did not display or provide data using the standard methodology to providers at any point; this approach was used only for the purposes of this report. Because these metrics are intended to evaluate relative provider performance, we assigned a percentile to each provider for his or her performance on the given metric using our attribution methodology and then, similarly, assigned a percentile to each provider using the standard methodology. This yielded 2 percentile scores for each provider and each metric. We then compared these percentile ranks for providers in 2 ways: (1) we determined how often providers who scored in the top half of the group for a given metric (above the 50th percentile) also scored in the top half of the group for that metric by using the other calculation method, and (2) we calculated the absolute value of the difference in percentiles between the 2 methods to characterize the impact on a provider’s ranking for that metric that might result from switching to the other method.
For instance, if a provider scored at the 20th percentile for the group in patient satisfaction with 1 attribution method and scored at the 40th percentile for the group in patient satisfaction using the other method, the absolute change in percentile would be 20 percentile points. But, this provider would still be below the 50th percentile by both methods (concordant bottom half performance). We did not perform this comparison for metrics assigned to the discharging provider (such as discharge summary turnaround time or readmissions) because the attending of record designation is assigned to the discharging provider at our hospital.
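The two comparison steps can be sketched as follows. The percentile function is a simple rank-based approximation, not necessarily the exact method used in the analysis.

```python
def pct_rank(scores):
    """scores: {provider: metric value}. Returns each provider's
    percentile (0-100) within the group -- a simple rank-based sketch;
    tied values take the lowest rank."""
    vals = sorted(scores.values())
    denom = max(len(vals) - 1, 1)
    return {p: 100.0 * vals.index(v) / denom for p, v in scores.items()}

def compare_methods(pct_a, pct_b):
    """pct_a, pct_b: {provider: percentile} under the 2 attribution
    methods. Returns (fraction of providers with concordant top-half /
    bottom-half standing, {provider: absolute percentile change})."""
    concordant = sum((pct_a[p] > 50) == (pct_b[p] > 50) for p in pct_a)
    deltas = {p: abs(pct_a[p] - pct_b[p]) for p in pct_a}
    return concordant / len(pct_a), deltas
```

The worked example in the text (20th percentile under one method, 40th under the other) produces a 20-point absolute change while remaining concordant bottom-half performance.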

RESULTS

The dashboard was successfully operationalized on July 1, 2011, with displays visible to providers as shown in Figure 2. Consistent with the principles of providing effective performance feedback to providers, the display simultaneously showed providers their individual performance as well as the performance of their peers. Providers were able to view their spider-web plot for prior quarters. Not shown are additional views that allowed providers to see quarterly trends in their data versus their peers across several fiscal years. Also available to providers was their ranking relative to their peers for each metric; specific peers were deidentified in the display.

There was notable discordance between provider rankings under the 2 methodologies, as shown in Table 2. Provider performance above or below the median was concordant only 56% to 75% of the time (depending on the particular metric), indicating substantial discordance because top-half or bottom-half concordance would be expected to occur by chance 50% of the time. Although the percentile differences between the 2 methods were modest for most providers (the median difference was 13 to 22 percentile points across the metrics), for some providers the method of calculation dramatically changed their rankings. For 5 of the 6 metrics we examined, at least 1 provider had a change of 50 percentile points or more in his or her ranking based on the method used, indicating that at least some providers would have scored markedly differently relative to their peers under the alternative methodology (Table 2). For VTE prophylaxis, for example, 1 provider had a 94-percentile-point change in ranking; similarly, 1 provider had an 88-percentile-point change in his or her LOS ranking between the 2 methodologies.

DISCUSSION

We found that it is possible to assign metrics across 1 hospital stay to multiple providers by using billing data. We also found a meaningful discrepancy in how well providers scored (relative to their peers) based on the method used for attribution. These results imply that hospitals should consider attributing performance metrics based on ascribed ownership from billing data and not just from attending of record status.

As hospitalist programs and providers in general are increasingly asked to develop dashboards to monitor individual and group performance, correctly attributing care to providers is likely to become increasingly important. Experts agree that principles of effective provider performance dashboards include ranking individual provider performance relative to peers, clearly displaying data in an easily accessible format, and ensuring that data can be credibly attributed to the individual provider.3,4,6 However, there appears to be no gold standard method for attribution, especially in the inpatient setting.

Several limitations of our findings are important to consider. First, our program is a relatively small, academic group with handoffs that typically occur every 1 to 2 weeks and sometimes with additional handoffs on weekends. Different care patterns and settings might impact the utility of our attribution methodology relative to the standard methodology. Additionally, it is important to note that the relative merits of the different methodologies cannot be ascertained from our comparison. We can demonstrate discordance between the attribution methodologies, but we cannot say that 1 method is correct and the other is flawed. Although we believe that our day-weighted approach feels fairer to providers based on group input and feedback, we did not conduct a formal survey to examine providers’ preferences for the standard versus day-weighted approaches. The appropriateness of a particular attribution method needs to be assessed locally and may vary based on the clinical setting. For instance, on a service in which patients are admitted for procedures, it may make more sense to attribute the outcome of the case to the proceduralist even if that provider did not bill for the patient’s care on a daily basis. Finally, the computational requirements of our methodology are not trivial and require linking billing data with administrative patient-level data, which may be challenging to operationalize in some institutions.

These limitations aside, we believe that our attribution methodology has face validity. For example, a provider might be justifiably frustrated if, using the standard methodology, he or she is charged with the LOS of a patient who had been hospitalized for months, particularly if that patient is discharged shortly after the provider assumes care. Our method addresses this type of misattribution. Particularly when individual provider compensation is based on performance on metrics (as is the case at our institution), optimizing provider attribution to particular patients may be important, and face validity may be required for group buy-in.

In summary, we have demonstrated that it is possible to use billing data to assign ownership of patients to multiple providers over 1 hospital stay. This could be applied to other hospitalist programs as well as other healthcare settings in which multiple providers care for patients during 1 healthcare encounter (eg, ICUs).
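As a minimal illustration of this kind of day-weighted attribution (the function and field names below are hypothetical, not taken from our implementation), a hospitalization-level metric can be split across providers in proportion to the days each provider billed:

```python
def day_weighted_attribution(billed_days, metric_value):
    """Split a hospitalization-level metric (e.g., LOS) across providers
    in proportion to the number of days each provider billed.

    billed_days: mapping of provider id -> count of billed days
    Returns: provider id -> attributed share of the metric.
    """
    total_days = sum(billed_days.values())
    return {provider: metric_value * days / total_days
            for provider, days in billed_days.items()}

# A 10-day stay billed 7 days by provider A and 3 days by provider B:
# provider A is credited with 7 days of the LOS and provider B with 3,
# rather than 1 provider being charged with the entire stay.
shares = day_weighted_attribution({"A": 7, "B": 3}, metric_value=10)
```

In practice, the billed-days mapping would be assembled by linking daily professional billing records to the admission, the linkage step whose operational burden is discussed among the limitations.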

Disclosure

The authors declare they have no relevant conflicts of interest.

References

1. Horwitz L, Partovian C, Lin Z, et al. Hospital-Wide (All-Condition) 30‐Day Risk-Standardized Readmission Measure. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/MMS/downloads/MMSHospital-WideAll-ConditionReadmissionRate.pdf. Accessed March 6, 2015.
2. Society of Hospital Medicine. Measuring Hospitalist Performance: Metrics, Reports, and Dashboards. 2007. https://www.hospitalmedicine.org/Web/Practice_Management/Products_and_Programs/measure_hosp_perf_metrics_reports_dashboards.aspx. Accessed May 12, 2013.
3. Teleki SS, Shaw R, Damberg CL, McGlynn EA. Providing performance feedback to individual physicians: current practice and emerging lessons. Santa Monica, CA: RAND Corporation; 2006. 1-47. https://www.rand.org/content/dam/rand/pubs/working_papers/2006/RAND_WR381.pdf. Accessed August 2017.
4. Brehaut JC, Colquhoun HL, Eva KW, et al. Practice Feedback Interventions: 15 Suggestions for Optimizing Effectiveness Practice Feedback Interventions. Ann Intern Med. 2016;164(6):435-441. PubMed
5. Dowding D, Randell R, Gardner P, et al. Dashboards for improving patient care: review of the literature. Int J Med Inform. 2015;84(2):87-100. PubMed
6. Landon BE, Normand S-LT, Blumenthal D, Daley J. Physician clinical performance assessment: prospects and barriers. JAMA. 2003;290(9):1183-1189. PubMed
7. Guyatt GH, Akl EA, Crowther M, Gutterman DD, Schünemann HJ. Executive summary: Antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians evidence-based clinical practice guidelines. Chest. 2012;141(2 suppl):7S-47S. PubMed
8. Siddiqui Z, Qayyum R, Bertram A, et al. Does Provider Self-reporting of Etiquette Behaviors Improve Patient Experience? A Randomized Controlled Trial. J Hosp Med. 2017;12(6):402-406. PubMed
9. Oduyebo I, Lehmann CU, Pollack CE, et al. Association of self-reported hospital discharge handoffs with 30-day readmissions. JAMA Intern Med. 2013;173(8):624-629. PubMed


Issue
Journal of Hospital Medicine 13(7)
Page Number
470-475. Published online first December 20, 2017
Article Source

© 2017 Society of Hospital Medicine

Correspondence: Carrie A. Herzke, MD, MBA, Clinical Director, Hospitalist Program, Johns Hopkins Hospital, 600 N. Wolfe Street, Meyer 8-134, Baltimore, MD 21287; Telephone: 443-287-3631; Fax: 410-502-0923; E-mail: [email protected]

Does provider self-reporting of etiquette behaviors improve patient experience? A randomized controlled trial


Physicians have historically been slow to adopt strategies to improve patient experience, often citing suboptimal data and a lack of evidence-driven strategies.1,2 However, public reporting of hospital-level physician domain Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) experience scores, and the more recent linking of payments to performance on patient experience metrics, have been associated with significant increases in physician domain scores for most hospitals.3 Hospitals and healthcare organizations have deployed a broad range of strategies to engage physicians. These include emphasizing the relationship between patient experience and patient compliance, complaints, and malpractice lawsuits; appealing to physicians’ sense of competitiveness by publishing individual provider experience scores; educating physicians on HCAHPS and providing them with regularly updated data; and developing specific techniques for improving patient-physician interaction.4-8

Studies show that educational curricula on improving etiquette and communication skills for physicians lead to improvements in patient experience, and many such training programs are available to hospitals at significant cost.9-15 Other studies that have focused on providing timely, individualized feedback to physicians using tools other than HCAHPS have shown improvements in experience in some instances.16,17 However, these strategies are resource intensive, require the presence of an independent observer in each patient room, and may not be practical in many settings. Further, long-term sustainability may be problematic.

Since the goal of any educational intervention targeting physicians is routinizing best practices, and since resource-intensive strategies of continuous assessment and feedback may not be practical, we sought to test the impact of periodic physician self-reporting of their etiquette-based behavior on their patient experience scores.

METHODS

Subjects

Hospitalists from 4 hospitals (2 community and 2 academic) that are part of the same healthcare system were the study subjects. Hospitalists who had at least 15 unique patients responding to the routinely administered Press Ganey experience survey during the baseline period were considered eligible. Eligible hospitalists were invited to enroll in the study if their site director confirmed that the provider was likely to stay with the group for the subsequent 12-month study period.

Self-Reported Frequency of Best-Practice Bedside Etiquette Behaviors
Table 1

Randomization, Intervention and Control Group

Hospitalists were randomized 1:1 to the study arm or the control arm. Study arm participants received biweekly etiquette behavior (EB) surveys and were asked to report how frequently they had performed 7 best-practice bedside etiquette behaviors during the previous 2-week period (Table 1). These behaviors were predefined by a consensus group of investigators as being amenable to self-report and commonly considered best practice, as described in detail below. Control arm participants received a similarly worded survey on quality improvement behaviors (QIB) that would not be expected to impact patient experience (such as reviewing medications to ensure that antithrombotic prophylaxis was prescribed; Table 1).


Baseline and Study Periods

A 12-month period prior to the enrollment of each hospitalist was considered the baseline period for that individual. Hospitalist eligibility was assessed based on the number of unique patients for each hospitalist who responded to the survey during this baseline period. Once enrolled, baseline provider-level patient experience scores were calculated from the survey responses during this 12-month baseline period. Baseline etiquette behavior performance was calculated from the first survey. After the initial survey, hospitalists received biweekly surveys (EB or QIB) for the 12-month study period, for a total of 26 surveys (including the initial survey).

Survey Development, Nature of Survey, Survey Distribution Methods

The EB and QIB physician self-report surveys were developed through an iterative process by the study team. The EB survey included elements from an etiquette-based medicine checklist for hospitalized patients described by Kahn et al.18 We conducted a review of the literature to identify evidence-based practices.19-22 Research team members contributed items on best practices in etiquette-based medicine from their experience. Specifically, behaviors were selected if they met the following 4 criteria: 1) performing the behavior did not lead to a significant increase in workload and was relatively easy to incorporate into the workflow; 2) occurrence of the behavior would be easy to note for any outside observer or for the providers themselves; 3) the practice was considered to be either an evidence-based or a consensus-based best practice; and 4) there was consensus among study team members on including the item. The survey was tested for understandability by hospitalists who were not eligible for the study.

The EB survey contained 7 items related to behaviors that were expected to impact patient experience. The QIB survey contained 4 items related to behaviors that were expected to improve quality (Table 1). The initial survey also included questions about demographic characteristics of the participants.

Survey questionnaires were sent via email every 2 weeks for a period of 12 months. The survey questionnaire became available every other week, between Friday morning and Tuesday midnight, during the study period. Hospitalists received daily email reminders on each of these days with a link to the survey website if they did not complete the survey. They had the opportunity to report that they were not on service in the prior week and opt out of the survey for the specific 2-week period. The survey questions were available online as well as on a mobile device format.

Provider Level Patient Experience Scores

Provider-level patient experience scores were calculated from the physician domain Press Ganey survey items, which included the time the physician spent with the patient, whether the physician addressed questions and worries, whether the physician kept the patient informed, the friendliness and courtesy of the physician, and the skill of the physician. Press Ganey responses were scored from 1 to 5 based on the Likert scale responses on the survey, such that a response of “very good” was scored 5 and a response of “very poor” was scored 1. Additionally, physician domain HCAHPS item responses (doctors treat with courtesy/respect, doctors listen carefully, doctors explain in a way patients understand) were used to calculate another set of HCAHPS provider-level experience scores. The responses were scored as 1 for an “always” response and 0 for any other response, consistent with CMS dichotomization of these results for public reporting. Weighted scores were calculated for individual hospitalists based on the proportion of days each hospitalist billed for the hospitalization, so that the experience scores of patients who were cared for by multiple providers were assigned to each provider in proportion to the percent of care delivered.23 Separate composite physician scores were generated from the 5 Press Ganey and the 3 HCAHPS physician items. Each item was weighted equally, with a maximum possible Press Ganey composite score of 25 (the sum of the maximum possible score of 5 on each of the 5 Press Ganey items) and a maximum possible HCAHPS composite score of 3 (the sum of the maximum possible score of 1 on each of the 3 HCAHPS items).
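As a simplified sketch of the composite scoring just described (the response labels and function names are ours for illustration, not the surveys’ exact item wording):

```python
# Likert responses mapped to 1-5, per the Press Ganey scoring described above.
PRESS_GANEY_SCALE = {"very poor": 1, "poor": 2, "fair": 3,
                     "good": 4, "very good": 5}

def press_ganey_composite(item_responses):
    """Sum of the 5 physician-domain Likert items; maximum possible 25."""
    return sum(PRESS_GANEY_SCALE[r] for r in item_responses)

def hcahps_composite(item_responses):
    """The 3 physician-domain HCAHPS items dichotomized per CMS convention:
    'always' scores 1, any other response scores 0; maximum possible 3."""
    return sum(1 if r == "always" else 0 for r in item_responses)
```

For example, a patient answering “very good” on all 5 Press Ganey items yields the maximum composite of 25, while answering “always” on 2 of the 3 HCAHPS items yields an HCAHPS composite of 2.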

ANALYSIS AND STATISTICAL METHODS

We analyzed the data to assess for changes in the frequency of self-reported behavior over the study period, changes in provider-level patient experience between the baseline and study periods, and the association between these 2 outcomes. The self-reported etiquette-based behavior responses were scored from 1 for the lowest response (never) to 4 for the highest (always). With 7 questions, the maximum attainable score was 28. The maximum score was normalized to 100 for ease of interpretation (corresponding to the percentage of time etiquette behaviors were employed, by self-report). Similarly, the maximum attainable self-reported QIB-related behavior score on the 4 questions was 16. This was also converted to a 0-100 scale for ease of comparison.
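The normalization described here is simply the raw sum divided by the maximum attainable score; a minimal sketch, with a function name of our choosing:

```python
def normalized_score(item_scores, item_max=4):
    """Convert summed survey responses (scored 1 = never .. 4 = always)
    to a 0-100 scale. For the 7-item EB survey the raw maximum is
    7 * 4 = 28; for the 4-item QIB survey it is 4 * 4 = 16."""
    return 100 * sum(item_scores) / (item_max * len(item_scores))

# "Always" on all 7 EB items -> 100.0; "sometimes" (2) on all 4 QIB items -> 50.0
eb_score = normalized_score([4, 4, 4, 4, 4, 4, 4])
qib_score = normalized_score([2, 2, 2, 2])
```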


Two additional sets of analyses were performed to evaluate changes in patient experience during the study period. First, the mean 12-month provider level patient experience composite score in the baseline period was compared with the 12-month composite score during the 12-month study period for the study group and the control group. These were assessed with and without adjusting for age, sex, race, and U.S. medical school graduate (USMG) status. In the second set of unadjusted and adjusted analyses, changes in biweekly composite scores during the study period were compared between the intervention and the control groups while accounting for correlation between observations from the same physician using mixed linear models. Linear mixed models were used to accommodate correlations among multiple observations made on the same physician by including random effects within each regression model. Furthermore, these models allowed us to account for unbalanced design in our data when not all physicians had an equal number of observations and data elements were collected asynchronously.24 Analyses were performed in R version 3.2.2 (The R Project for Statistical Computing, Vienna, Austria); linear mixed models were performed using the ‘nlme’ package.25

We hypothesized that self-reporting on biweekly surveys would result in increases in the frequency of the reported behavior in each arm. We also hypothesized that, because of biweekly reflection and self-reporting on etiquette-based bedside behavior, patient experience scores would increase in the study arm.

RESULTS

Of the 80 hospitalists approached to participate in the study, 64 elected to participate (80% participation rate). The mean response rate to the survey was 57.4% for the intervention arm and 85.7% for the control arm. Higher response rates were not associated with improved patient experience scores. Of the respondents, 43.1% were younger than 35 years of age, 51.5% practiced in academic settings, and 53.1% were female. There was no statistical difference between hospitalists’ baseline composite experience scores based on gender, age, academic hospitalist status, USMG status, and English as a second language status. Similarly, there were no differences in poststudy composite experience scores based on physician characteristics.

Physicians reported high rates of etiquette-based behavior at baseline (mean score, 83.9 ± 3.3), and this showed moderate improvement over the study period (5.6% [3.9%-7.3%]; P < 0.0001). Similarly, there was a moderate increase in the frequency of self-reported behavior in the control arm (6.8% [3.5%-10.1%]; P < 0.0001). Hospitalists reported on 80.7% (77.6%-83.4%) of the biweekly surveys that they “almost always” wrapped up by asking, “Do you have any other questions or concerns?” or something similar. In contrast, hospitalists reported on only 27.9% (24.7%-31.3%) of the biweekly surveys that they “almost always” sat down in the patient room.

The composite physician domain Press Ganey experience scores were no different for the intervention arm and the control arm during the 12-month baseline period (21.8 vs. 21.7; P = 0.90) and the 12-month intervention period (21.6 vs. 21.5; P = 0.75). Baseline self-reported behaviors were not associated with baseline experience scores. Similarly, there were no differences between the arms on composite physician domain HCAHPS experience scores during baseline (2.1 vs. 2.3; P = 0.13) and intervention periods (2.2 vs. 2.1; P = 0.33).

The difference-in-differences analysis of the baseline and postintervention composites between the intervention arm and the control arm was not statistically significant for Press Ganey composite physician experience scores (-0.163 vs. -0.322; P = 0.71) or HCAHPS composite physician scores (-0.162 vs. -0.071; P = 0.06). The results did not change when controlling for survey response rate (the percentage of biweekly surveys completed by the hospitalist), age, gender, USMG status, English as a second language status, or percent clinical effort. The difference-in-differences analysis of the individual Press Ganey and HCAHPS physician domain items used to calculate the composite score was also not statistically significant (Table 2).
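The difference-in-differences estimate itself is simple arithmetic: the pre-to-post change in the intervention arm minus the change in the control arm. A sketch using the reported Press Ganey composite changes:

```python
def difference_in_differences(change_intervention, change_control):
    """DiD estimate: intervention-arm change minus control-arm change."""
    return change_intervention - change_control

# Press Ganey composite changes reported in the text (-0.163 vs. -0.322):
effect = difference_in_differences(-0.163, -0.322)  # ~0.159
```

A positive value nominally favors the intervention arm, but as reported, this difference was not statistically significant (P = 0.71).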

Difference-in-Differences Analysis of Preintervention and Postintervention Physician Domain HCAHPS and Press Ganey Scores
Table 2


Changes in self-reported etiquette-based behavior were not associated with any changes in composite Press Ganey and HCAHPS experience score or individual items of the composite experience scores between baseline and intervention period. Similarly, biweekly self-reported etiquette behaviors were not associated with composite and individual item experience scores derived from responses of the patients discharged during the same 2-week reporting period. The intra-class correlation between observations from the same physician was only 0.02%, suggesting that most of the variation in scores was likely due to patient factors and did not result from differences between physicians.

DISCUSSION

This 12-month randomized multicenter study of hospitalists showed that repeated self-reporting of etiquette-based behavior results in modest reported increases in the performance of these behaviors. However, there was no associated increase in provider-level patient experience scores at the end of the study period, either compared with the baseline scores of the same physicians or compared with the scores of the control group. The study demonstrated the feasibility of self-reporting of behaviors by physicians, with high participation when modest incentives were provided.


Educational and feedback strategies used to improve patient experience are very resource intensive. Training sessions provided at some hospitals may take hours, and sustained effects are unproved. The presence of an independent observer in patient rooms to generate feedback for providers is not scalable or sustainable outside of a research study environment.9-11,15,17,26-29 We attempted to use repeated physician self-reporting to reinforce important and easy-to-adopt components of etiquette-based behavior and thereby develop a more easily sustainable strategy. This may have failed for several reasons.

When combining “always” and “usually” responses, the physicians in our study reported a high level of etiquette behavior at baseline. If physicians believe that they are performing well at baseline, they would not consider this to be an area in need of improvement. Bigger changes in behavior may have been possible had the physicians rated themselves less favorably at baseline. Inflated or high baseline self-assessment of performance might also have led to limited success of other types of educational interventions had they been employed.

Studies published since the rollout of our study have shown that physicians significantly overestimate how frequently they perform these etiquette behaviors.30,31 It is likely that this was the case for our study subjects as well. This may, at best, indicate that a much larger change in self-reported performance would be needed to yield meaningful actual changes or, worse, may render self-reported etiquette behavior entirely unreliable. Interventions designed to improve etiquette-based behavior might need to provide feedback about performance.

A program that provides education on the importance of etiquette-based behaviors, obtains objective measures of performance of these behaviors, and offers individualized feedback may be more likely to increase the desired behaviors. This is a limitation of our study. However, we aimed to test a method that required limited resources. Additionally, our method for attributing HCAHPS scores to an individual physician, based on weighted scores that were calculated according to the proportion of days each hospitalist billed for the hospitalization, may be inaccurate. It is possible that each interaction does not contribute equally to the overall score. A team-based intervention and experience measurements could overcome this limitation.

CONCLUSION

This randomized trial demonstrated the feasibility of self-assessment of bedside etiquette behaviors by hospitalists but failed to demonstrate a meaningful impact of self-report on patient experience. These findings suggest that more intensive interventions, perhaps involving direct observation, peer-to-peer mentoring, or other techniques, may be required to significantly impact physician etiquette behaviors.

Disclosure

Johns Hopkins Hospitalist Scholars Program provided funding support. Dr. Qayyum is a consultant for Sunovion. The other authors have nothing to report.


References

1. Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q. 1998;76(4):625-648. PubMed
2. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: What it will take to accelerate progress. Milbank Q. 1998;76(4):593-624. PubMed
3. Mann RK, Siddiqui Z, Kurbanova N, Qayyum R. Effect of HCAHPS reporting on patient satisfaction with physician communication. J Hosp Med. 2015;11(2):105-110. PubMed
4. Rivers PA, Glover SH. Health care competition, strategic mission, and patient satisfaction: research model and propositions. J Health Organ Manag. 2008;22(6):627-641. PubMed
5. Kim SS, Kaplowitz S, Johnston MV. The effects of physician empathy on patient satisfaction and compliance. Eval Health Prof. 2004;27(3):237-251. PubMed
6. Stelfox HT, Gandhi TK, Orav EJ, Gustafson ML. The relation of patient satisfaction with complaints against physicians and malpractice lawsuits. Am J Med. 2005;118(10):1126-1133. PubMed
7. Rodriguez HP, Rodday AM, Marshall RE, Nelson KL, Rogers WH, Safran DG. Relation of patients’ experiences with individual physicians to malpractice risk. Int J Qual Health Care. 2008;20(1):5-12. PubMed
8. Cydulka RK, Tamayo-Sarver J, Gage A, Bagnoli D. Association of patient satisfaction with complaints and risk management among emergency physicians. J Emerg Med. 2011;41(4):405-411. PubMed
9. Windover AK, Boissy A, Rice TW, Gilligan T, Velez VJ, Merlino J. The REDE model of healthcare communication: Optimizing relationship as a therapeutic agent. Journal of Patient Experience. 2014;1(1):8-13. 
10. Chou CL, Hirschmann K, Fortin AH 6th, Lichstein PR. The impact of a faculty learning community on professional and personal development: the facilitator training program of the American Academy on Communication in Healthcare. Acad Med. 2014;89(7):1051-1056. PubMed
11. Kennedy M, Denise M, Fasolino M, John P, Gullen M, David J. Improving the patient experience through provider communication skills building. Patient Experience Journal. 2014;1(1):56-60. 
12. Braverman AM, Kunkel EJ, Katz L, et al. Do I buy it? How AIDET™ training changes residents’ values about patient care. Journal of Patient Experience. 2015;2(1):13-20. 
13. Riess H, Kelley JM, Bailey RW, Dunn EJ, Phillips M. Empathy training for resident physicians: a randomized controlled trial of a neuroscience-informed curriculum. J Gen Intern Med. 2012;27(10):1280-1286. PubMed
14. Rothberg MB, Steele JR, Wheeler J, Arora A, Priya A, Lindenauer PK. The relationship between time spent communicating and communication outcomes on a hospital medicine service. J Gen Intern Med. 2012;27(2):185-189. PubMed
15. O’Leary KJ, Cyrus RM. Improving patient satisfaction: timely feedback to specific physicians is essential for success. J Hosp Med. 2015;10(8):555-556. PubMed
16. Indovina K, Keniston A, Reid M, et al. Real-time patient experience surveys of hospitalized medical patients. J Hosp Med. 2016;11(4):251-256. PubMed
17. Banka G, Edgington S, Kyulo N, et al. Improving patient satisfaction through physician education, feedback, and incentives. J Hosp Med. 2015;10(8):497-502. PubMed
18. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358(19):1988-1989. PubMed
19. Arora V, Gangireddy S, Mehrotra A, Ginde R, Tormey M, Meltzer D. Ability of hospitalized patients to identify their in-hospital physicians. Arch Intern Med. 2009;169(2):199-201. PubMed
20. Francis JJ, Pankratz VS, Huddleston JM. Patient satisfaction associated with correct identification of physicians’ photographs. Mayo Clin Proc. 2001;76(6):604-608. PubMed
21. Strasser F, Palmer JL, Willey J, et al. Impact of physician sitting versus standing during inpatient oncology consultations: patients’ preference and perception of compassion and duration. A randomized controlled trial. J Pain Symptom Manage. 2005;29(5):489-497. PubMed
22. Dudas RA, Lemerman H, Barone M, Serwint JR. PHACES (Photographs of Academic Clinicians and Their Educational Status): a tool to improve delivery of family-centered care. Acad Pediatr. 2010;10(2):138-145. PubMed
23. Herzke C, Michtalik H, Durkin N, et al. A method for attributing patient-level metrics to rotating providers in an inpatient setting. J Hosp Med. Under revision. 
24. Holden JE, Kelley K, Agarwal R. Analyzing change: a primer on multilevel models with applications to nephrology. Am J Nephrol. 2008;28(5):792-801. PubMed
25. Pinheiro J, Bates D, DebRoy S, Sarkar D. Linear and nonlinear mixed effects models. R package version. 2007;3:57. 
26. Braverman AM, Kunkel EJ, Katz L, et al. Do I buy it? How AIDET™ training changes residents’ values about patient care. Journal of Patient Experience. 2015;2(1):13-20.
27. Riess H, Kelley JM, Bailey RW, Dunn EJ, Phillips M. Empathy training for resident physicians: A randomized controlled trial of a neuroscience-informed curriculum. J Gen Intern Med. 2012;27(10):1280-1286. PubMed
28. Raper SE, Gupta M, Okusanya O, Morris JB. Improving communication skills: A course for academic medical center surgery residents and faculty. J Surg Educ. 2015;72(6):e202-e211. PubMed
29. Indovina K, Keniston A, Reid M, et al. Real‐time patient experience surveys of hospitalized medical patients. J Hosp Med. 2016;11(4):251-256. PubMed
30. Block L, Hutzler L, Habicht R, et al. Do internal medicine interns practice etiquette‐based communication? A critical look at the inpatient encounter. J Hosp Med. 2013;8(11):631-634. PubMed
31. Tackett S, Tad-y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette-based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913. PubMed

Issue
Journal of Hospital Medicine 12(6)
Page Number
402-406


Survey questionnaires were sent via email every 2 weeks for a period of 12 months. The survey questionnaire became available every other week, between Friday morning and Tuesday midnight, during the study period. Hospitalists received daily email reminders on each of these days with a link to the survey website if they did not complete the survey. They had the opportunity to report that they were not on service in the prior week and opt out of the survey for the specific 2-week period. The survey questions were available online as well as on a mobile device format.

Provider Level Patient Experience Scores

Provider-level patient experience scores were calculated from the physician domain Press Ganey survey items, which included the time that the physician spent with patients, the physician addressed questions/worries, the physician kept patients informed, the friendliness/courtesy of physician, and the skill of physician. Press Ganey responses were scored from 1 to 5 based on the Likert scale responses on the survey such that a response “very good” was scored 5 and a response “very poor” was scored 1. Additionally, physician domain HCAHPS item (doctors treat with courtesy/respect, doctors listen carefully, doctors explain in way patients understand) responses were utilized to calculate another set of HCAHPS provider level experience scores. The responses were scored as 1 for “always” response and “0” for any other response, consistent with CMS dichotomization of these results for public reporting. Weighted scores were calculated for individual hospitalists based on the proportion of days each hospitalist billed for the hospitalization so that experience scores of patients who were cared for by multiple providers were assigned to each provider in proportion to the percent of care delivered.23 Separate composite physician scores were generated from the 5 Press Ganey and for the 3 HCAHPS physician items. Each item was weighted equally, with the maximum possible for Press Ganey composite score of 25 (sum of the maximum possible score of 5 on each of the 5 Press Ganey items) and the HCAHPS possible total was 3 (sum of the maximum possible score of 1 on each of the 3 HCAHPS items).

ANALYSIS AND STATISTICAL METHODS

We analyzed the data to assess for changes in frequency of self-reported behavior over the study period, changes in provider-level patient experience between baseline and study period, and the association between the these 2 outcomes. The self-reported etiquette-based behavior responses were scored as 1 for the lowest response (never) to 4 as the highest (always). With 7 questions, the maximum attainable score was 28. The maximum score was normalized to 100 for ease of interpretation (corresponding to percentage of time etiquette behaviors were employed, by self-report). Similarly, the maximum attainable self-reported QIB-related behavior score on the 4 questions was 16. This was also converted to 0-100 scale for ease of comparison.

 

 

Two additional sets of analyses were performed to evaluate changes in patient experience during the study period. First, the mean 12-month provider level patient experience composite score in the baseline period was compared with the 12-month composite score during the 12-month study period for the study group and the control group. These were assessed with and without adjusting for age, sex, race, and U.S. medical school graduate (USMG) status. In the second set of unadjusted and adjusted analyses, changes in biweekly composite scores during the study period were compared between the intervention and the control groups while accounting for correlation between observations from the same physician using mixed linear models. Linear mixed models were used to accommodate correlations among multiple observations made on the same physician by including random effects within each regression model. Furthermore, these models allowed us to account for unbalanced design in our data when not all physicians had an equal number of observations and data elements were collected asynchronously.24 Analyses were performed in R version 3.2.2 (The R Project for Statistical Computing, Vienna, Austria); linear mixed models were performed using the ‘nlme’ package.25

We hypothesized that self-reporting on biweekly surveys would result in increases in the frequency of the reported behavior in each arm. We also hypothesized that, because of biweekly reflection and self-reporting on etiquette-based bedside behavior, patient experience scores would increase in the study arm.

RESULTS

Of the 80 hospitalists approached to participate in the study, 64 elected to participate (80% participation rate). The mean response rate to the survey was 57.4% for the intervention arm and 85.7% for the control arm. Higher response rates were not associated with improved patient experience scores. Of the respondents, 43.1% were younger than 35 years of age, 51.5% practiced in academic settings, and 53.1% were female. There was no statistical difference between hospitalists’ baseline composite experience scores based on gender, age, academic hospitalist status, USMG status, and English as a second language status. Similarly, there were no differences in poststudy composite experience scores based on physician characteristics.

Physicians reported high rates of etiquette-based behavior at baseline (mean score, 83.9+/-3.3), and this showed moderate improvement over the study period (5.6 % [3.9%-7.3%, P < 0.0001]). Similarly, there was a moderate increase in frequency of self-reported behavior in the control arm (6.8% [3.5%-10.1%, P < 0.0001]). Hospitalists reported on 80.7% (77.6%-83.4%) of the biweekly surveys that they “almost always” wrapped up by asking, “Do you have any other questions or concerns” or something similar. In contrast, hospitalists reported on only 27.9% (24.7%-31.3%) of the biweekly survey that they “almost always” sat down in the patient room.

The composite physician domain Press Ganey experience scores were no different for the intervention arm and the control arm during the 12-month baseline period (21.8 vs. 21.7; P = 0.90) and the 12-month intervention period (21.6 vs. 21.5; P = 0.75). Baseline self-reported behaviors were not associated with baseline experience scores. Similarly, there were no differences between the arms on composite physician domain HCAHPS experience scores during baseline (2.1 vs. 2.3; P = 0.13) and intervention periods (2.2 vs. 2.1; P = 0.33).

The difference in difference analysis of the baseline and postintervention composite between the intervention arm and the control arm was not statistically significant for Press Ganey composite physician experience scores (-0.163 vs. -0.322; P = 0.71) or HCAHPS composite physician scores (-0.162 vs. -0.071; P = 0.06). The results did not change when controlled for survey response rate (percentage biweekly surveys completed by the hospitalist), age, gender, USMG status, English as a second language status, or percent clinical effort. The difference in difference analysis of the individual Press Ganey and HCAHPS physician domain items that were used to calculate the composite score was also not statistically significant (Table 2).

Difference in Difference Analysis of Pre-Intervention and Postintervention Physician Domain HCAHPS and Press Ganey Scores
Table 2


Changes in self-reported etiquette-based behavior were not associated with any changes in composite Press Ganey and HCAHPS experience score or individual items of the composite experience scores between baseline and intervention period. Similarly, biweekly self-reported etiquette behaviors were not associated with composite and individual item experience scores derived from responses of the patients discharged during the same 2-week reporting period. The intra-class correlation between observations from the same physician was only 0.02%, suggesting that most of the variation in scores was likely due to patient factors and did not result from differences between physicians.

DISCUSSION

This 12-month randomized multicenter study of hospitalists showed that repeated self-reporting of etiquette-based behavior results in modest reported increases in performance of these behaviors. However, there was no associated increase in provider level patient experience scores at the end of the study period when compared to baseline scores of the same physicians or when compared to the scores of the control group. The study demonstrated feasibility of self-reporting of behaviors by physicians with high participation when provided modest incentives.

 

 

Educational and feedback strategies used to improve patient experience are very resource intensive. Training sessions provided at some hospitals may take hours, and sustained effects are unproved. The presence of an independent observer in patient rooms to generate feedback for providers is not scalable and sustainable outside of a research study environment.9-11,15,17,26-29 We attempted to use physician repeated self-reporting to reinforce the important and easy to adopt components of etiquette-based behavior to develop a more easily sustainable strategy. This may have failed for several reasons.

When combining “always” and “usually” responses, the physicians in our study reported a high level of etiquette behavior at baseline. If physicians believe that they are performing well at baseline, they would not consider this to be an area in need of improvement. Bigger changes in behavior may have been possible had the physicians rated themselves less favorably at baseline. Inflated or high baseline self-assessment of performance might also have led to limited success of other types of educational interventions had they been employed.

Studies published since the rollout of our study have shown that physicians significantly overestimate how frequently they perform these etiquette behaviors.30,31 It is likely that was the case in our study subjects. This may, at best, indicate that a much higher change in the level of self-reported performance would be needed to result in meaningful actual changes, or worse, may render self-reported etiquette behavior entirely unreliable. Interventions designed to improve etiquette-based behavior might need to provide feedback about performance.

A program that provides education on the importance of etiquette-based behaviors, obtains objective measures of performance of these behaviors, and offers individualized feedback may be more likely to increase the desired behaviors. This is a limitation of our study. However, we aimed to test a method that required limited resources. Additionally, our method for attributing HCAHPS scores to an individual physician, based on weighted scores that were calculated according to the proportion of days each hospitalist billed for the hospitalization, may be inaccurate. It is possible that each interaction does not contribute equally to the overall score. A team-based intervention and experience measurements could overcome this limitation.

CONCLUSION

This randomized trial demonstrated the feasibility of self-assessment of bedside etiquette behaviors by hospitalists but failed to demonstrate a meaningful impact on patient experience through self-report. These findings suggest that more intensive interventions, perhaps involving direct observation, peer-to-peer mentoring, or other techniques may be required to impact significantly physician etiquette behaviors.

Disclosure

Johns Hopkins Hospitalist Scholars Program provided funding support. Dr. Qayyum is a consultant for Sunovion. The other authors have nothing to report.

 

Physicians have historically been slow to adopt strategies to improve patient experience, often citing suboptimal data and a lack of evidence-driven strategies.1,2 However, public reporting of hospital-level physician domain Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) experience scores, and the more recent linking of payments to performance on patient experience metrics, have been associated with significant increases in physician domain scores for most hospitals.3 Hospitals and healthcare organizations have deployed a broad range of strategies to engage physicians. These include emphasizing the relationship between patient experience and patient compliance, complaints, and malpractice lawsuits; appealing to physicians’ sense of competitiveness by publishing individual provider experience scores; educating physicians on HCAHPS and providing them with regularly updated data; and developing specific techniques for improving patient-physician interaction.4-8

Studies show that educational curricula on improving etiquette and communication skills for physicians lead to improvements in patient experience, and many such training programs are available to hospitals at significant cost.9-15 Other studies, focused on providing timely and individualized feedback to physicians using tools other than HCAHPS, have shown improvement in experience in some instances.16,17 However, these strategies are resource intensive, require the presence of an independent observer in each patient room, and may not be practical in many settings. Further, long-term sustainability may be problematic.

Since the goal of any educational intervention targeting physicians is routinizing best practices, and since resource-intensive strategies of continuous assessment and feedback may not be practical, we sought to test the impact of periodic physician self-reporting of their etiquette-based behavior on their patient experience scores.

METHODS

Subjects

Hospitalists from 4 hospitals (2 community and 2 academic) that are part of the same healthcare system were the study subjects. Hospitalists who had at least 15 unique patients responding to the routinely administered Press Ganey experience survey during the baseline period were considered eligible. Eligible hospitalists were invited to enroll in the study if their site director confirmed that the provider was likely to stay with the group for the subsequent 12-month study period.

Table 1. Self-Reported Frequency of Best-Practice Bedside Etiquette Behaviors

Randomization, Intervention, and Control Groups

Hospitalists were randomized 1:1 to the study arm or the control arm. Study-arm participants received biweekly etiquette behavior (EB) surveys and were asked to report how frequently they had performed 7 best-practice bedside etiquette behaviors during the previous 2-week period (Table 1). These behaviors were predefined by a consensus group of investigators as being amenable to self-report and commonly considered best practice, as described in detail below. Control-arm participants received a similarly worded survey on quality improvement behaviors (QIB) that would not be expected to impact patient experience (such as reviewing medications to ensure that antithrombotic prophylaxis was prescribed; Table 1).


Baseline and Study Periods

A 12-month period prior to the enrollment of each hospitalist was considered the baseline period for that individual. Hospitalist eligibility was assessed based on the number of unique patients for each hospitalist who responded to the survey during this baseline period. Once enrolled, baseline provider-level patient experience scores were calculated from the survey responses during this 12-month baseline period. Baseline etiquette behavior performance of the study arm was calculated from the first survey. After the initial survey, hospitalists received biweekly surveys (EB or QIB) for the 12-month study period, for a total of 26 surveys (including the initial survey).

Survey Development, Nature of Survey, Survey Distribution Methods

The EB and QIB physician self-report surveys were developed through an iterative process by the study team. The EB survey included elements from an etiquette-based medicine checklist for hospitalized patients described by Kahn.18 We conducted a review of the literature to identify evidence-based practices.19-22 Research team members contributed items on best practices in etiquette-based medicine from their experience. Specifically, behaviors were selected if they met the following 4 criteria: 1) performing the behavior did not lead to a significant increase in workload and was relatively easy to incorporate into the workflow; 2) occurrence of the behavior would be easy to note for an outside observer or the providers themselves; 3) the practice was considered to be either an evidence-based or consensus-based best practice; and 4) there was consensus among study team members on including the item. The survey was tested for understandability by hospitalists who were not eligible for the study.

The EB survey contained 7 items related to behaviors that were expected to impact patient experience. The QIB survey contained 4 items related to behaviors that were expected to improve quality (Table 1). The initial survey also included questions about demographic characteristics of the participants.

Survey questionnaires were sent via email every 2 weeks for a period of 12 months. Each questionnaire was available every other week, from Friday morning until Tuesday midnight, during the study period. Hospitalists received daily email reminders on each of these days with a link to the survey website if they had not completed the survey. They had the opportunity to report that they were not on service in the prior week and to opt out of the survey for that specific 2-week period. The survey questions were available online as well as in a mobile-friendly format.

Provider Level Patient Experience Scores

Provider-level patient experience scores were calculated from the physician domain Press Ganey survey items, which included the time the physician spent with the patient, whether the physician addressed questions/worries, whether the physician kept the patient informed, the friendliness/courtesy of the physician, and the skill of the physician. Press Ganey responses were scored from 1 to 5 based on the Likert-scale responses on the survey, such that a response of “very good” was scored 5 and a response of “very poor” was scored 1. Additionally, physician domain HCAHPS items (doctors treat with courtesy/respect, doctors listen carefully, doctors explain in a way patients understand) were used to calculate a second set of provider-level experience scores. These responses were scored as 1 for an “always” response and 0 for any other response, consistent with the CMS dichotomization of these results for public reporting. Weighted scores were calculated for individual hospitalists based on the proportion of days each hospitalist billed for the hospitalization, so that experience scores of patients who were cared for by multiple providers were assigned to each provider in proportion to the percent of care delivered.23 Separate composite physician scores were generated from the 5 Press Ganey items and from the 3 HCAHPS physician items. Each item was weighted equally, with a maximum possible Press Ganey composite score of 25 (sum of the maximum possible score of 5 on each of the 5 Press Ganey items) and a maximum possible HCAHPS total of 3 (sum of the maximum possible score of 1 on each of the 3 HCAHPS items).
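The billed-days weighting described above can be sketched in code. This is an illustrative sketch under stated assumptions, not the study's actual implementation; the function name, data layout, and example values are hypothetical.

```python
from collections import defaultdict

def provider_weighted_scores(stays):
    """Attribute each stay's composite experience score to providers in
    proportion to the days each provider billed for that hospitalization.

    `stays` is a list of (composite_score, {provider: days_billed}) pairs;
    returns a weighted mean score per provider.
    """
    weighted_sum = defaultdict(float)  # sum of weight * score per provider
    weight_total = defaultdict(float)  # sum of weights per provider
    for score, billed_days in stays:
        total_days = sum(billed_days.values())
        for provider, days in billed_days.items():
            weight = days / total_days  # provider's share of this stay
            weighted_sum[provider] += weight * score
            weight_total[provider] += weight
    return {p: weighted_sum[p] / weight_total[p] for p in weighted_sum}

# Hypothetical example: a 4-day stay billed 3:1 by hospitalists A and B
# (Press Ganey composite 25), plus a stay covered entirely by B (composite 15).
scores = provider_weighted_scores([
    (25, {"A": 3, "B": 1}),
    (15, {"B": 2}),
])
```

Hospitalist A's score reflects only the first stay, while B's score blends both stays in proportion to the weights 0.25 and 1.0.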

ANALYSIS AND STATISTICAL METHODS

We analyzed the data to assess for changes in the frequency of self-reported behavior over the study period, changes in provider-level patient experience between the baseline and study periods, and the association between these 2 outcomes. The self-reported etiquette-based behavior responses were scored from 1 for the lowest response (never) to 4 for the highest (always). With 7 questions, the maximum attainable score was 28. The score was normalized to 100 for ease of interpretation (corresponding to the percentage of time etiquette behaviors were employed, by self-report). Similarly, the maximum attainable self-reported QIB-related behavior score on the 4 questions was 16; this was also converted to a 0-100 scale for ease of comparison.
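A minimal sketch of this normalization (the function name and example responses are hypothetical, not study data):

```python
def normalized_score(responses, scale_max=4):
    """Scale summed Likert responses (1 = never ... 4 = always) so that the
    maximum attainable score maps to 100, as done for the 7-item EB and
    4-item QIB surveys."""
    return 100 * sum(responses) / (len(responses) * scale_max)

eb_all_always = normalized_score([4] * 7)   # EB survey, "always" on all 7 items
qib_mixed = normalized_score([3, 3, 4, 2])  # QIB survey, raw score 12 of 16
```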


Two additional sets of analyses were performed to evaluate changes in patient experience during the study period. First, the mean provider-level patient experience composite score in the 12-month baseline period was compared with the composite score during the 12-month study period for the study group and the control group, with and without adjusting for age, sex, race, and U.S. medical school graduate (USMG) status. In the second set of unadjusted and adjusted analyses, changes in biweekly composite scores during the study period were compared between the intervention and control groups. Linear mixed models were used to accommodate correlations among multiple observations made on the same physician by including random effects within each regression model. These models also allowed us to account for the unbalanced design of our data, as not all physicians had an equal number of observations and data elements were collected asynchronously.24 Analyses were performed in R version 3.2.2 (The R Project for Statistical Computing, Vienna, Austria); linear mixed models were fit using the ‘nlme’ package.25
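The first, unadjusted comparison reduces to a difference-in-difference estimate. As a sketch (with hypothetical composite scores, not the study's values):

```python
def difference_in_difference(pre_int, post_int, pre_ctrl, post_ctrl):
    """Change in the intervention arm minus change in the control arm;
    a nonzero value suggests an effect beyond any secular trend shared
    by both arms."""
    return (post_int - pre_int) - (post_ctrl - pre_ctrl)

# Hypothetical composite scores on the 25-point Press Ganey scale.
dd = difference_in_difference(pre_int=21.75, post_int=21.5,
                              pre_ctrl=21.5, post_ctrl=21.5)
```

Here the intervention arm drops by 0.25 points while the control arm is flat, so the estimate attributes the full -0.25 change to the intervention.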

We hypothesized that self-reporting on biweekly surveys would result in increases in the frequency of the reported behavior in each arm. We also hypothesized that, because of biweekly reflection and self-reporting on etiquette-based bedside behavior, patient experience scores would increase in the study arm.

RESULTS

Of the 80 hospitalists approached to participate in the study, 64 elected to participate (80% participation rate). The mean survey response rate was 57.4% in the intervention arm and 85.7% in the control arm. Higher response rates were not associated with improved patient experience scores. Of the respondents, 43.1% were younger than 35 years of age, 51.5% practiced in academic settings, and 53.1% were female. There were no statistically significant differences in hospitalists’ baseline composite experience scores by gender, age, academic hospitalist status, USMG status, or English-as-a-second-language status. Similarly, there were no differences in poststudy composite experience scores based on physician characteristics.

Physicians reported high rates of etiquette-based behavior at baseline (mean score, 83.9 ± 3.3), and this showed moderate improvement over the study period (5.6% [3.9%-7.3%]; P < 0.0001). Similarly, there was a moderate increase in the frequency of self-reported behavior in the control arm (6.8% [3.5%-10.1%]; P < 0.0001). Hospitalists reported on 80.7% (77.6%-83.4%) of the biweekly surveys that they “almost always” wrapped up by asking, “Do you have any other questions or concerns?” or something similar. In contrast, hospitalists reported on only 27.9% (24.7%-31.3%) of the biweekly surveys that they “almost always” sat down in the patient room.

The composite physician domain Press Ganey experience scores were no different for the intervention arm and the control arm during the 12-month baseline period (21.8 vs. 21.7; P = 0.90) and the 12-month intervention period (21.6 vs. 21.5; P = 0.75). Baseline self-reported behaviors were not associated with baseline experience scores. Similarly, there were no differences between the arms on composite physician domain HCAHPS experience scores during baseline (2.1 vs. 2.3; P = 0.13) and intervention periods (2.2 vs. 2.1; P = 0.33).

The difference-in-difference analysis of the baseline and postintervention composites between the intervention and control arms was not statistically significant for Press Ganey composite physician experience scores (-0.163 vs. -0.322; P = 0.71) or HCAHPS composite physician scores (-0.162 vs. -0.071; P = 0.06). The results did not change when controlling for survey response rate (percentage of biweekly surveys completed by the hospitalist), age, gender, USMG status, English-as-a-second-language status, or percent clinical effort. The difference-in-difference analysis of the individual Press Ganey and HCAHPS physician domain items used to calculate the composite scores was also not statistically significant (Table 2).

Table 2. Difference-in-Difference Analysis of Preintervention and Postintervention Physician Domain HCAHPS and Press Ganey Scores


Changes in self-reported etiquette-based behavior were not associated with changes in the composite Press Ganey and HCAHPS experience scores, or in the individual items of those composites, between the baseline and intervention periods. Similarly, biweekly self-reported etiquette behaviors were not associated with composite or individual-item experience scores derived from responses of patients discharged during the same 2-week reporting period. The intraclass correlation between observations from the same physician was only 0.02%, suggesting that most of the variation in scores was likely due to patient factors rather than differences between physicians.

DISCUSSION

This 12-month randomized multicenter study of hospitalists showed that repeated self-reporting of etiquette-based behavior results in modest reported increases in the performance of these behaviors. However, there was no associated increase in provider-level patient experience scores at the end of the study period, either when compared with the same physicians’ baseline scores or when compared with the scores of the control group. The study did demonstrate the feasibility of physician self-reporting of behaviors, with high participation when modest incentives were provided.


Educational and feedback strategies used to improve patient experience are very resource intensive. Training sessions provided at some hospitals may take hours, and sustained effects are unproved. The presence of an independent observer in patient rooms to generate feedback for providers is neither scalable nor sustainable outside of a research study environment.9-11,15,17,26-29 We attempted to use repeated physician self-reporting to reinforce the important, easy-to-adopt components of etiquette-based behavior and thereby develop a more easily sustainable strategy. This may have failed for several reasons.

When “always” and “usually” responses were combined, the physicians in our study reported a high level of etiquette behavior at baseline. Physicians who believe they are performing well at baseline are unlikely to consider this an area in need of improvement. Larger changes in behavior may have been possible had the physicians rated themselves less favorably at baseline. Inflated or high baseline self-assessment of performance might also have limited the success of other types of educational interventions had they been employed.

Studies published since the rollout of our study have shown that physicians significantly overestimate how frequently they perform these etiquette behaviors.30,31 It is likely that this was the case among our study subjects as well. At best, this may indicate that a much larger change in self-reported performance would be needed to produce meaningful actual change; at worst, it may render self-reported etiquette behavior entirely unreliable. Interventions designed to improve etiquette-based behavior might therefore need to provide objective feedback about performance.

A program that provides education on the importance of etiquette-based behaviors, obtains objective measures of the performance of these behaviors, and offers individualized feedback may be more likely to increase the desired behaviors; the absence of these elements is a limitation of our study. However, we aimed to test a method that required limited resources. Additionally, our method for attributing HCAHPS scores to an individual physician, based on weighted scores calculated according to the proportion of days each hospitalist billed for the hospitalization, may be inaccurate: it is possible that each interaction does not contribute equally to the overall score. A team-based intervention with team-level experience measurement could overcome this limitation.

CONCLUSION

This randomized trial demonstrated the feasibility of self-assessment of bedside etiquette behaviors by hospitalists but failed to demonstrate a meaningful impact of self-report on patient experience. These findings suggest that more intensive interventions, perhaps involving direct observation, peer-to-peer mentoring, or other techniques, may be required to significantly improve physician etiquette behaviors.

Disclosure

Johns Hopkins Hospitalist Scholars Program provided funding support. Dr. Qayyum is a consultant for Sunovion. The other authors have nothing to report.


References

1. Blumenthal D, Kilo CM. A report card on continuous quality improvement. Milbank Q. 1998;76(4):625-648. PubMed
2. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: What it will take to accelerate progress. Milbank Q. 1998;76(4):593-624. PubMed
3. Mann RK, Siddiqui Z, Kurbanova N, Qayyum R. Effect of HCAHPS reporting on patient satisfaction with physician communication. J Hosp Med. 2015;11(2):105-110. PubMed
4. Rivers PA, Glover SH. Health care competition, strategic mission, and patient satisfaction: research model and propositions. J Health Organ Manag. 2008;22(6):627-641. PubMed
5. Kim SS, Kaplowitz S, Johnston MV. The effects of physician empathy on patient satisfaction and compliance. Eval Health Prof. 2004;27(3):237-251. PubMed
6. Stelfox HT, Gandhi TK, Orav EJ, Gustafson ML. The relation of patient satisfaction with complaints against physicians and malpractice lawsuits. Am J Med. 2005;118(10):1126-1133. PubMed
7. Rodriguez HP, Rodday AM, Marshall RE, Nelson KL, Rogers WH, Safran DG. Relation of patients’ experiences with individual physicians to malpractice risk. Int J Qual Health Care. 2008;20(1):5-12. PubMed
8. Cydulka RK, Tamayo-Sarver J, Gage A, Bagnoli D. Association of patient satisfaction with complaints and risk management among emergency physicians. J Emerg Med. 2011;41(4):405-411. PubMed
9. Windover AK, Boissy A, Rice TW, Gilligan T, Velez VJ, Merlino J. The REDE model of healthcare communication: Optimizing relationship as a therapeutic agent. Journal of Patient Experience. 2014;1(1):8-13. 
10. Chou CL, Hirschmann K, Fortin AH 6th, Lichstein PR. The impact of a faculty learning community on professional and personal development: the facilitator training program of the American Academy on Communication in Healthcare. Acad Med. 2014;89(7):1051-1056. PubMed
11. Kennedy M, Denise M, Fasolino M, John P, Gullen M, David J. Improving the patient experience through provider communication skills building. Patient Experience Journal. 2014;1(1):56-60. 
12. Braverman AM, Kunkel EJ, Katz L, et al. Do I buy it? How AIDET™ training changes residents’ values about patient care. Journal of Patient Experience. 2015;2(1):13-20. 
13. Riess H, Kelley JM, Bailey RW, Dunn EJ, Phillips M. Empathy training for resident physicians: a randomized controlled trial of a neuroscience-informed curriculum. J Gen Intern Med. 2012;27(10):1280-1286. PubMed
14. Rothberg MB, Steele JR, Wheeler J, Arora A, Priya A, Lindenauer PK. The relationship between time spent communicating and communication outcomes on a hospital medicine service. J Gen Intern Med. 2012;27(2):185-189. PubMed
15. O’Leary KJ, Cyrus RM. Improving patient satisfaction: timely feedback to specific physicians is essential for success. J Hosp Med. 2015;10(8):555-556. PubMed
16. Indovina K, Keniston A, Reid M, et al. Real‐time patient experience surveys of hospitalized medical patients. J Hosp Med. 2016;10(8):497-502. PubMed
17. Banka G, Edgington S, Kyulo N, et al. Improving patient satisfaction through physician education, feedback, and incentives. J Hosp Med. 2015;10(8):497-502. PubMed
18. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358(19):1988-1989. PubMed
19. Arora V, Gangireddy S, Mehrotra A, Ginde R, Tormey M, Meltzer D. Ability of hospitalized patients to identify their in-hospital physicians. Arch Intern Med. 2009;169(2):199-201. PubMed
20. Francis JJ, Pankratz VS, Huddleston JM. Patient satisfaction associated with correct identification of physicians’ photographs. Mayo Clin Proc. 2001;76(6):604-608. PubMed
21. Strasser F, Palmer JL, Willey J, et al. Impact of physician sitting versus standing during inpatient oncology consultations: patients’ preference and perception of compassion and duration. A randomized controlled trial. J Pain Symptom Manage. 2005;29(5):489-497. PubMed
22. Dudas RA, Lemerman H, Barone M, Serwint JR. PHACES (Photographs of Academic Clinicians and Their Educational Status): a tool to improve delivery of family-centered care. Acad Pediatr. 2010;10(2):138-145. PubMed
23. Herzke C, Michtalik H, Durkin N, et al. A method for attributing patient-level metrics to rotating providers in an inpatient setting. J Hosp Med. Under revision. 
24. Holden JE, Kelley K, Agarwal R. Analyzing change: a primer on multilevel models with applications to nephrology. Am J Nephrol. 2008;28(5):792-801. PubMed
25. Pinheiro J, Bates D, DebRoy S, Sarkar D. Linear and nonlinear mixed effects models. R package version. 2007;3:57. 
26. Braverman AM, Kunkel EJ, Katz L, et al. Do I buy it? How AIDET™ training changes residents’ values about patient care. Journal of Patient Experience. 2015;2(1):13-20.
27. Riess H, Kelley JM, Bailey RW, Dunn EJ, Phillips M. Empathy training for resident physicians: A randomized controlled trial of a neuroscience-informed curriculum. J Gen Intern Med. 2012;27(10):1280-1286. PubMed
28. Raper SE, Gupta M, Okusanya O, Morris JB. Improving communication skills: A course for academic medical center surgery residents and faculty. J Surg Educ. 2015;72(6):e202-e211. PubMed
29. Indovina K, Keniston A, Reid M, et al. Real‐time patient experience surveys of hospitalized medical patients. J Hosp Med. 2016;11(4):251-256. PubMed
30. Block L, Hutzler L, Habicht R, et al. Do internal medicine interns practice etiquette‐based communication? A critical look at the inpatient encounter. J Hosp Med. 2013;8(11):631-634. PubMed
31. Tackett S, Tad-y D, Rios R, Kisuule F, Wright S. Appraising the practice of etiquette-based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913. PubMed


Issue
Journal of Hospital Medicine 12(6)
Page Number
402-406
Display Headline
Does provider self-reporting of etiquette behaviors improve patient experience? A randomized controlled trial
Article Source

© 2017 Society of Hospital Medicine

Correspondence Location
*Address for correspondence and reprint requests: Zishan Siddiqui, MD, Nelson 223, 1800 Orleans St., Baltimore, MD 21287; Telephone: 443 287-3631; Fax: 410-502-0923; E-mail: [email protected]