Development and Validation of the Tool to Assess Inpatient Satisfaction with Care from Hospitalists (TAISCH)

Haruka Torok, MD, MSc

Biostatistics, Epidemiology, and Data Management Core, Center for Child and Community Health Research, Johns Hopkins Bayview Medical Center, Johns Hopkins University School of Medicine, Baltimore, Maryland

Journal of Hospital Medicine. 2014;9(9):553-558.

Patient satisfaction scores are being reported publicly and will affect hospital reimbursement rates under Hospital Value-Based Purchasing.[1] Patient satisfaction scores are currently obtained through metrics such as the Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS)[2] and Press Ganey (PG)[3] surveys. Such surveys are mailed to a variable proportion of patients following their discharge from the hospital, and ask patients about the quality of care they received during their admission. Domains assessed regarding patients' inpatient experiences range from room cleanliness to the amount of time the physician spent with them.

The Society of Hospital Medicine (SHM), the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] Ideally, accurate information would be delivered as feedback to individual providers in a timely manner in hopes of improving performance; however, the current methodology has shortcomings that limit its usefulness. First, several hospitalists and consultants may be involved in the care of 1 patient during the hospital stay, but the score can only be tied to a single physician. Current survey methods attribute all responses to that particular doctor, usually the attending of record, although patients may very well be thinking of other physicians when responding to questions. Second, only a few questions on the surveys ask about doctors' performance. The aforementioned surveys have 3 to 8 questions about doctors' care, which limits the ability to assess physician performance comprehensively. Finally, the surveys are mailed approximately 1 week after the patient's discharge, usually without a name or photograph of the physician to facilitate patient/caregiver recall. This time lag and lack of information to prompt patient recall likely lead to imprecision in assessment. In addition, the response rates to these surveys are typically low, around 25% (personal oral communication with our division's service excellence stakeholder Dr. L.P. in September 2013). These deficiencies limit the usefulness of such data in coaching individual providers about their performance because the data cannot be delivered in a timely fashion, and the reliability of the attribution is suspect.

With these considerations in mind, we developed and validated a new survey metric, the Tool to Assess Inpatient Satisfaction with Care from Hospitalists (TAISCH). We hypothesized that the results would be different from those collected using conventional methodologies.

PATIENTS AND METHODS

Study Design and Subjects

Our cross‐sectional study surveyed inpatients under the care of hospitalist physicians working without the support of trainees or allied health professionals (such as nurse practitioners or physician assistants). The subjects were hospitalized at a 560‐bed academic medical center on a general medical floor between September 2012 and December 2012. All participating hospitalist physicians were members of a division of hospital medicine.

TAISCH Development

Several steps were taken to establish content validity evidence.[5] We developed TAISCH by building upon the theoretical underpinnings of the quality of care measures that are endorsed by the SHM Membership Committee Guidelines for Hospitalists' Patient Satisfaction.[4] This directive recommends that patient satisfaction with hospitalist care should be assessed across 6 domains: physician availability, physician concern for patients, physician communication skills, physician courteousness, physician clinical skills, and physician involvement of patients' families. Other existing validated measures tied to the quality of patient care were reviewed, and items related to the physician's care were considered for inclusion to further substantiate content validity.[6, 7, 8, 9, 10, 11, 12] Input from colleagues with expertise in clinical excellence and service excellence was also solicited. This included the director of Hopkins' Miller Coulson Academy of Clinical Excellence and the grant review committee members of the Johns Hopkins Osler Center for Clinical Excellence (who funded this study).[13, 14]

The preliminary instrument contained 17 items, including 2 conditional questions, and was first pilot tested on 5 hospitalized patients. We assessed the time it took to administer the surveys as well as patients' comments and questions about each survey item. This resulted in minor wording changes for clarification and changes in the order of the questions. We then pursued a second phase of piloting using the revised survey, which was administered to >20 patients. There were no further adjustments as patients reported that TAISCH was clear and concise.

From interviews with patients after pilot testing, it became clear that respondents were carefully reflecting on the quality of care and performance of their treating physician, thereby generating response process validity evidence.[5]

Data Collection

To ensure that patients had perspective upon which to base their assessment, they were only asked to appraise physicians after being cared for by the same hospitalist provider for at least 2 consecutive days. Patients who were on isolation, those who were non-English speaking, and those with impaired decision-making capacity (such as mental status change or dementia) were excluded. Patients were enrolled only if they could correctly name their doctor or at least identify a photograph of their hospitalist provider on a page that included pictures of all division members. Those patients who were able to name the provider or correctly select the provider from the page of photographs were considered to have correctly identified their provider. In order to ensure the confidentiality of the patients and their responses, all data collection was performed by a trained research assistant who had no patient-care responsibilities. The survey was confidential, did not include any patient identifiers, and patients were assured that providers would never see their individual responses. The patients were given the option to complete TAISCH by verbally responding to the research assistant's questions, filling out the paper survey, or completing the survey online using an iPad at the bedside. TAISCH specifically asked the patients to rate their hospitalist provider's performance along several domains: communication skills, clinical skills, availability, empathy, courteousness, and discharge planning; 5-point Likert scales were used exclusively.
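As a concrete illustration of the screening rules just described, a minimal sketch follows. The record fields and the function name are invented for this example and are not taken from the study's actual screening instrument.

```python
from dataclasses import dataclass

@dataclass
class Patient:
    consecutive_days_same_hospitalist: int
    on_isolation: bool
    english_speaking: bool
    has_decision_capacity: bool
    identified_provider: bool  # named the doctor or picked the correct photo

def eligible_for_taisch(p: Patient) -> bool:
    """Apply the study's stated screening rules to one hypothetical record."""
    return (
        p.consecutive_days_same_hospitalist >= 2
        and not p.on_isolation
        and p.english_speaking
        and p.has_decision_capacity
        and p.identified_provider
    )

# Example: a patient on day 3 with the same hospitalist who correctly
# selected the provider's photograph would be approached for the survey.
print(eligible_for_taisch(Patient(3, False, True, True, True)))  # True
```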

In addition to the TAISCH questions, we asked patients (1) an overall satisfaction question, "I would recommend Dr. X to my loved ones should he or she need hospitalization in the future" (response options: strongly disagree, disagree, neutral, agree, strongly agree), (2) their pain level using the Wong-Baker pain scale,[15] and (3) the Jefferson Scale of Patient's Perceptions of Physician Empathy (JSPPPE).[16, 17] Associations between TAISCH and these variables (as well as PG data) would be examined to ascertain "relations to other variables" validity evidence.[5] Specifically, we sought convergent and discriminant validity evidence: TAISCH should be associated positively with constructs for which we expect positive associations (convergent validity) and negatively with those for which we expect negative associations (discriminant validity).[18] The Wong-Baker pain scale is a pain-assessment tool recommended by the Joint Commission on Accreditation of Healthcare Organizations, and it is widely used in hospitals and various healthcare settings.[19] The scale ranges from 0 to 10 (0 for no pain and 10 indicating the worst pain). The hypothesis was that patients' pain levels would adversely affect their perception of the physician's performance (discriminant validity). The JSPPPE is a validated 5-item scale developed to measure patients' perceptions of their physicians' empathic engagement. It has significant correlations with the American Board of Internal Medicine's patient rating surveys, and it is used in standardized patient examinations for medical students.[20] The hypothesis was that patient perception of the quality of physician care would correlate positively with their assessment of the physician's empathy (convergent validity).

Although all of the hospitalist providers in the division consented to participate in this study, only hospitalist providers for whom at least 4 patient surveys were collected were included in the analysis. The study was approved by our institutional review board.

Data Analysis

All data were analyzed using Stata 11 (StataCorp, College Station, TX). Data were analyzed to determine the potential for a single comprehensive assessment of physician performance with confirmatory factor analysis (CFA) using maximum likelihood extraction. Additional factor analyses examined the potential for a multiple factor solution using exploratory factor analysis (EFA) with principal component factor analysis and varimax rotation. Examination of scree plots, factor loadings for individual items greater than 0.40, eigenvalues greater than 1.0, and the substantive meaning of the factors were all taken into consideration when determining the number of factors to retain from the factor analytic models.[21] Cronbach's αs were calculated for each factor to assess reliability. These data provided internal structure validity evidence (demonstrated by acceptable reliability and factor structure) for TAISCH.[5]
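Although the study's analyses were run in Stata, the same sequence of checks (eigenvalues above 1.0, the share of variance explained by a dominant factor, item loadings against the 0.40 threshold, and Cronbach's α) can be illustrated with a short, self-contained sketch. The simulated responses below are hypothetical, and the extraction shown uses principal components on the item correlation matrix rather than Stata's maximum likelihood routine.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 200 patients x 15 Likert items (1-5), driven by one
# latent trait so that a dominant single factor emerges.
latent = rng.normal(size=(200, 1))
items = np.clip(np.round(3 + latent + rng.normal(scale=0.8, size=(200, 15))), 1, 5)

# Principal-component extraction on the item correlation matrix.
corr = np.corrcoef(items, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(corr)
order = np.argsort(eigvals)[::-1]              # largest eigenvalue first
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

print("Factors with eigenvalue > 1.0 (Kaiser criterion):", int(np.sum(eigvals > 1.0)))
print("Variance explained by first factor: %.0f%%" % (100 * eigvals[0] / eigvals.sum()))

# Loadings on the first factor; flag items below the 0.40 threshold.
loadings = eigvecs[:, 0] * np.sqrt(eigvals[0])
print("Items with |loading| < 0.40:", np.where(np.abs(loadings) < 0.40)[0])

# Cronbach's alpha for the item set: k/(k-1) * (1 - sum of item variances
# divided by the variance of the total score).
k = items.shape[1]
alpha = k / (k - 1) * (1 - items.var(axis=0, ddof=1).sum() / items.sum(axis=1).var(ddof=1))
print("Cronbach's alpha: %.2f" % alpha)
```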

After arriving at the final TAISCH scale, composite TAISCH scores were computed. Associations of composite TAISCH scores with the Wong-Baker pain scale, the JSPPPE, and the overall satisfaction question were assessed using linear regression with the svy command in Stata to account for the nested design of having each patient report on a single hospitalist provider. The correlation between composite TAISCH scores and PG physician care scores (comprising 5 questions: time physician spent with you, physician concern with questions/worries, physician kept you informed, friendliness/courtesy of physician, and skill of physician) was assessed at the provider level when both types of data were available.
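For illustration, the provider-nested regression can be approximated outside Stata with ordinary least squares and cluster-robust standard errors, clustering on the hospitalist each patient rated. The sketch below assumes a hypothetical analysis file; the column names and simulated values are invented for the example and do not reproduce the study's data.

```python
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(1)

# Hypothetical analysis file: one row per patient, with the composite TAISCH
# score, the JSPPPE empathy score, and the id of the hospitalist rated.
n = 200
df = pd.DataFrame({
    "hospitalist_id": rng.integers(0, 27, size=n),
    "jsppe": rng.normal(25, 5, size=n),
})
df["taisch"] = 3.8 + 0.02 * (df["jsppe"] - 25) + rng.normal(scale=0.3, size=n)

# OLS of TAISCH on JSPPPE with standard errors clustered on the hospitalist,
# mirroring the patient-within-provider nesting handled by svy in Stata.
X = sm.add_constant(df[["jsppe"]])
fit = sm.OLS(df["taisch"], X).fit(
    cov_type="cluster", cov_kwds={"groups": df["hospitalist_id"]}
)
print(fit.summary())
```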

RESULTS

A total of 330 patients were considered to be eligible through medical record screening. Of those patients, 73 (22%) had already been discharged by the time the research assistant attempted to enroll them after 2 days of care by a single physician. Of the 257 inpatients approached, 30 patients (12%) refused to participate. Among the 227 patients who consented, 24 (9%) were excluded because they were unable to correctly identify their hospitalist provider. A total of 203 patients were enrolled, and each patient rated a single hospitalist; a total of 29 unique hospitalists were assessed by these patients. The patients' mean age was 60 years, 114 (56%) were female, and 61 (30%) were of nonwhite race (Table 1). The hospitalist physicians' demographic information is also shown in Table 1. Two hospitalists with fewer than 4 surveys collected were excluded from the analysis. Thus, the final analysis included 200 unique patients, each assessing 1 of 27 hospitalists (mean=7.4 surveys per hospitalist).

Table 1. Characteristics of the 203 Patients and 29 Hospitalist Physicians Studied

NOTE: Abbreviations: SD, standard deviation.

Patients, N=203
  Age, y, mean (SD): 60.0 (17.2)
  Female, n (%): 114 (56.1)
  Nonwhite race, n (%): 61 (30.5)
  Observation stay, n (%): 45 (22.1)
  "How are you feeling today?" n (%)
    Very poor: 11 (5.5)
    Poor: 14 (7.0)
    Fair: 67 (33.5)
    Good: 71 (35.5)
    Very good: 33 (16.5)
    Excellent: 4 (2.0)

Hospitalists, N=29
  Age, n (%)
    26-30 years: 7 (24.1)
    31-35 years: 8 (27.6)
    36-40 years: 12 (41.4)
    41-45 years: 2 (6.9)
  Female, n (%): 11 (37.9)
  International medical graduate, n (%): 18 (62.1)
  Years in current practice, n (%)
    <1: 9 (31.0)
    1-2: 7 (24.1)
    3-4: 6 (20.7)
    5-6: 5 (17.2)
    7 or more: 2 (6.9)
  Race, n (%)
    Caucasian: 4 (13.8)
    Asian: 19 (65.5)
    African/African American: 5 (17.2)
    Other: 1 (3.4)
  Academic rank, n (%)
    Assistant professor: 9 (31.0)
    Clinical instructor: 10 (34.5)
    Clinical associate/nonfaculty: 10 (34.5)
  Percentage of clinical effort, n (%)
    >70%: 6 (20.7)
    50%-70%: 19 (65.5)
    <50%: 4 (13.8)

Validation of TAISCH

Of the 17 items on the TAISCH administered, the 2 conditional questions ("When I asked to see Dr. X, s/he came within a reasonable amount of time." and "If Dr. X interacted with your family, how well did s/he deal with them?") were applicable to fewer than 40% of patients. As such, they were not included in the analysis.

Internal Structure Validity Evidence

Results from the factor analyses are shown in Table 2. The CFA modeling of a single-factor solution with 15 items explained 42% of the total variance. The 27 hospitalists' average 15-item TAISCH scores ranged from 3.25 to 4.28 (mean [standard deviation]=3.82 [0.24]; possible score range: 1-5). Reliability of the 15-item TAISCH was appropriate (Cronbach's α=0.88).

Table 2. Factor Loadings for the 15-Item TAISCH Measure Based on Confirmatory Factor Analysis

NOTE: Abbreviations: TAISCH, Tool to Assess Inpatient Satisfaction with Care from Hospitalists. *Response category: below average, average, above average, top 10% of all doctors, the very best of any doctor I have come across. The remaining items used one of the following response categories: none, a little, some, a lot, tremendously; strongly disagree, disagree, neutral, agree, strongly agree; poor, fair, good, very good, excellent; or never, rarely, sometimes, most of the time, every single time.

Item (Cronbach's α=0.88): Factor loading
"Compared to all other physicians that you know, how do you rate Dr. X's compassion, empathy, and concern for you?"*: 0.91
"Compared to all other physicians that you know, how do you rate Dr. X's ability to communicate with you?"*: 0.88
"Compared to all other physicians that you know, how do you rate Dr. X's skill in diagnosing and treating your medical conditions?"*: 0.88
"Compared to all other physicians that you know, how do you rate Dr. X's fund of knowledge?"*: 0.80
"How much confidence do you have in Dr. X's plan for your care?": 0.71
"Dr. X kept me informed of the plans for my care.": 0.69
"Effectively preparing patients for discharge is an important part of what doctors in the hospital do. How well has Dr. X done in getting you ready to be discharged from the hospital?": 0.67
"Dr. X let me talk without interrupting.": 0.60
"Dr. X encouraged me to ask questions.": 0.59
"Dr. X checks to be sure I understood everything.": 0.55
"I sensed Dr. X was in a rush when s/he was with me." (reverse coded): 0.55
"Dr. X showed interest in my views and opinions about my health.": 0.54
"Dr. X discusses options with me and involves me in decision making.": 0.47
"Dr. X asked permission to enter the room and waited for an answer.": 0.25
"Dr. X sat down when s/he visited my bedside.": 0.14

As shown in Table 2, 2 variables had factor loadings below the minimum threshold of 0.40 in the CFA for the 15-item TAISCH when modeling a single-factor solution. Both items were related to physician etiquette: "Dr. X asked permission to enter the room and waited for an answer." and "Dr. X sat down when s/he visited my bedside."

When the CFA was executed again as a single factor omitting the 2 items that demonstrated lower factor loadings, the 13-item single-factor solution explained 47% of the total variance, and the Cronbach's α was 0.92.

EFA models were also explored for potential alternate solutions. These analyses resulted in lower reliability (lower Cronbach's α), weak construct operationalization, and poor face validity (as judged by the research team).

Both the 13‐ and 15‐item single factor solutions were examined further to determine whether associations with criterion variables (pain, empathy) differed substantively. Given that results were similar across both solutions, subsequent analyses were completed with the 15‐item single factor solution, which included the etiquette‐related variables.

Relationship to Other Variables Validity Evidence

The association between the 15-item TAISCH and the JSPPPE was significantly positive (β=12.2, P<0.001). Additionally, there was a positive and significant association between TAISCH and the overall satisfaction question, "I would recommend Dr. X to my loved ones should they need hospitalization in the future" (β=11.2, P<0.001). This overall satisfaction question was also positively associated with the JSPPPE (β=13.2, P<0.001). There was a statistically significant negative association between TAISCH and the Wong-Baker pain scale (β=-2.42, P<0.05).

The PG data from the same period were available for 24 out of 27 hospitalists. The number of PG surveys collected per provider ranged from 5 to 30 (mean=14). At the provider level, there was not a statistically significant correlation between PG and the 15‐item TAISCH (P=0.51). Of note, PG was also not significantly correlated with the overall satisfaction question, JSPPPE, or the Wong‐Baker pain scale (all P>0.10).

DISCUSSION

Our new metric, TAISCH, was found to be a reliable and valid measurement tool to assess patient satisfaction with the hospitalist physician's care. Because we only surveyed patients who could correctly identify their hospitalist physicians after interacting for at least 2 consecutive days, the attribution of the data to the individual hospitalist is almost certainly correct. The high participation rate indicates that the patients were not hesitant about rating their hospitalist provider's quality of care, even when asked while they were still in the hospital.

The majority of the patients approached were able to correctly identify their hospitalist provider. This rate (91%) was much higher than the rate previously reported in the literature where a picture card was used to improve provider recognition.[22] It is also likely that having 1 physician, rather than a team of physicians, care for patients makes it easier for patients to recall the name and recognize the face of their inpatient provider.

The CFA of TAISCH showed good fit but suggests that 2 variables, both from Kahn's etiquette-based medicine (EtBM) checklist,[9] may not load in the same way as the other items. Tackett and colleagues reported that hospitalists who performed more EtBM behaviors scored higher on PG evaluations.[23] Such results, along with the comparable variance explained and reliability, convinced us to retain these 2 items in the final 15-item TAISCH despite their low loadings in the CFA. Although the literature supports the idea that physician etiquette is related to the perception of high-quality care, it is possible that these 2 questions were answered differently (and thereby failed to load the same way) because environmental limitations may prevent physicians from performing these behaviors consistently. We prefer the 15-item version of TAISCH, and future studies may provide additional information about its performance as compared to the 13-item adaptation.

The significantly negative association between the Wong-Baker pain scale and TAISCH stresses the importance of adequately addressing and treating the patient's pain. Hanna et al. showed that patients' perceptions of pain control were associated with their overall satisfaction scores as measured by HCAHPS.[24] The association seen in our study was not unexpected, because TAISCH is administered while patients are acutely ill in the hospital, when pain is likely more prevalent and severe than it is in the postdischarge setting (when the HCAHPS or PG surveys are administered). Interestingly, Hanna et al. discovered that the team's attention to controlling pain was more strongly correlated with overall satisfaction than was the actual pain control.[24] These data, now confirmed by our study, should serve to remind us that a hospitalist's concern and effort to relieve pain may augment patient satisfaction with the quality of care, even when eliminating the pain may be difficult or impossible.

TAISCH was found not to be correlated with PG scores. Several explanations for this deserve consideration. First, the postdischarge PG survey used at our institution does not list the name of the specific hospitalist provider for the patient to evaluate. Because patients encounter multiple physicians during their hospital stay (eg, emergency department physicians, hospitalist providers, consultants), it is possible that patients are not reflecting on the attending of record when completing the mailed PG questionnaire. Second, the representation of patients who responded to TAISCH and PG differed; almost all patients completed TAISCH, as opposed to the small minority who decided to respond to the PG survey. Third, TAISCH measures the physicians' performance more comprehensively, with a larger number of variables. Last, it is possible that we were underpowered to detect a significant correlation, because there were only 24 providers who had data from both TAISCH and PG. However, our results endorse using caution in interpreting PG scores for an individual hospitalist's performance, particularly for high-stakes consequences (including the provision of incentives to high performers and the insistence on remediation for low performers).

Several limitations of this study should be considered. First, only hospitalist providers from a single division were assessed. This may limit the generalizability of our findings. Second, although patients were assured about the confidentiality of their responses, they might have provided more favorable answers because they felt uncomfortable rating their physician poorly. One review article on the measurement of healthcare satisfaction indicated that impersonal (mailed) methods result in more criticism and lower satisfaction than assessments made in person using interviews; as a trade-off, mailed surveys yield lower response rates that may introduce other forms of bias.[25] Even on the HCAHPS survey report for the same period from our institution, 78% of patients gave top box ratings for our doctors' communication skills, which is at the state average.[26] Similarly, a study that used postdischarge telephone interviews to collect patients' satisfaction with hospitalists' care quality reported an average score of 4.20 out of 5.[27] These findings confirm that highly skewed ratings are common for these types of surveys, irrespective of how or when the data are collected.

Despite the aforementioned limitations, TAISCH use need not be limited to hospitalist physicians. It may also be used to assess allied health professionals' or trainees' performance, which cannot be assessed by HCAHPS or PG. Applying TAISCH in different hospital settings (eg, emergency departments or critical care units), assessing hospitalists' reactions to TAISCH, learning whether TAISCH leads to hospitalists' behavior changes, and appraising whether performance can improve in response to coaching interventions for those performing poorly are all research questions that merit additional consideration.

CONCLUSION

TAISCH allows for obtaining patient satisfaction data that are highly attributable to specific hospitalist providers. The data collection method also permits high response rates so that input comes from almost all patients. The timeliness of the TAISCH assessments also makes it possible for real‐time service recovery, which is impossible with other commonly used metrics assessing patient satisfaction. Our next step will include testing the most effective way to provide feedback to providers and to coach these individuals so as to improve performance.

Acknowledgements

The authors would like to thank Po‐Han Chen at the BEAD Core for his statistical analysis support.

Disclosures: This study was supported by the Johns Hopkins Osler Center for Clinical Excellence. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. The authors report no conflicts of interest.

References
  1. Blumenthal D, Jena AB. Hospital value-based purchasing. J Hosp Med. 2013;8:271-277.
  2. HCAHPS survey. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed August 27, 2011.
  3. Press Ganey survey. Press Ganey website. Available at: http://www.pressganey.com/index.aspx. Accessed February 12, 2013.
  4. Society of Hospital Medicine. Membership Committee Guidelines for Hospitalists' Patient Satisfaction Surveys. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Practice_Resources.
  5. Cook DA, Beckman TJ. Current concepts in validity and reliability for psychometric instruments: theory and application. Am J Med. 2006;119:166.e7-166.e16.
  6. Makoul G, Krupat E, Chang CH. Measuring patient views of physician communication skills: development and testing of the Communication Assessment Tool. Patient Educ Couns. 2007;67:333-342.
  7. Jenkinson C, Coulter A, Bruster S. The Picker Patient Experience Questionnaire: development and validation using data from in-patient surveys in five countries. Int J Qual Health Care. 2002;14:353-358.
  8. The Patient Satisfaction Questionnaire from RAND Health. RAND Health website. Available at: http://www.rand.org/health/surveys_tools/psq.html. Accessed December 30, 2011.
  9. Kahn MW. Etiquette-based medicine. N Engl J Med. 2008;358:1988-1989.
  10. Christmas C, Kravet S, Durso C, Wright SM. Defining clinical excellence in academic medicine: a qualitative study of the master clinicians. Mayo Clin Proc. 2008;83:989-994.
  11. Wright SM, Christmas C, Burkhart K, Kravet S, Durso C. Creating an academy of clinical excellence at Johns Hopkins Bayview Medical Center: a 3-year experience. Acad Med. 2010;85:1833-1839.
  12. Bendapudi NM, Berry LL, Keith FA, Turner Parish J, Rayburn WL. Patients' perspectives on ideal physician behaviors. Mayo Clin Proc. 2006;81(3):338-344.
  13. The Miller-Coulson Academy of Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/innovative/signature_programs/academy_of_clinical_excellence/. Accessed April 25, 2014.
  14. Osler Center for Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/johns_hopkins_bayview/education_training/continuing_education/osler_center_for_clinical_excellence. Accessed April 25, 2014.
  15. Wong-Baker FACES Foundation. Available at: http://www.wongbakerfaces.org. Accessed July 8, 2013.
  16. Kane GC, Gotto JL, Mangione S, West S, Hojat M. Jefferson Scale of Patient's Perceptions of Physician Empathy: preliminary psychometric data. Croat Med J. 2007;48:81-86.
  17. Glaser KM, Markham FW, Adler HM, McManus PR, Hojat M. Relationships between scores on the Jefferson Scale of Physician Empathy, patient perceptions of physician empathy, and humanistic approaches to patient care: a validity study. Med Sci Monit. 2007;13(7):CR291-CR294.
  18. Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait-multimethod matrix. Psychol Bull. 1959;56(2):81-105.
  19. The Joint Commission. Facts about pain management. Available at: http://www.jointcommission.org/pain_management. Accessed April 25, 2014.
  20. Berg K, Majdan JF, Berg D, et al. Medical students' self-reported empathy and simulated patients' assessments of student empathy: an analysis by gender and ethnicity. Acad Med. 2011;86(8):984-988.
  21. Gorsuch RL. Factor Analysis. Hillsdale, NJ: Lawrence Erlbaum Associates; 1983.
  22. Arora VM, Schaninger C, D'Arcy M, et al. Improving inpatients' identification of their doctors: use of FACE cards. Jt Comm J Qual Patient Saf. 2009;35(12):613-619.
  23. Tackett S, Tad-y D, Rios R, et al. Appraising the practice of etiquette-based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913.
  24. Hanna MN, Gonzalez-Fernandez M, Barrett AD, et al. Does patient perception of pain control affect patient satisfaction across surgical units in a tertiary teaching hospital? Am J Med Qual. 2012;27:411-416.
  25. Crow R, Gage H, Hampson S, et al. The measurement of satisfaction with health care: implications for practice from a systematic review of the literature. Health Technol Assess. 2002;6(32):1-244.
  26. Centers for Medicare & Medicaid Services.
  27. 7(2):131-136.
Article PDF
Issue
Journal of Hospital Medicine - 9(9)
Publications
Page Number
553-558
Sections
Files
Files
Article PDF
Article PDF

Patient satisfaction scores are being reported publicly and will affect hospital reimbursement rates under Hospital Value Based Purchasing.[1] Patient satisfaction scores are currently obtained through metrics such as Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS)[2] and Press Ganey (PG)[3] surveys. Such surveys are mailed to a variable proportion of patients following their discharge from the hospital, and ask patients about the quality of care they received during their admission. Domains assessed regarding the patients' inpatient experiences range from room cleanliness to the amount of time the physician spent with them.

The Society of Hospital Medicine (SHM), the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] Ideally, accurate information would be delivered as feedback to individual providers in a timely manner in hopes of improving performance; however, the current methodology has shortcomings that limit its usefulness. First, several hospitalists and consultants may be involved in the care of 1 patient during the hospital stay, but the score can only be tied to a single physician. Current survey methods attribute all responses to that particular doctor, usually the attending of record, although patients may very well be thinking of other physicians when responding to questions. Second, only a few questions on the surveys ask about doctors' performance. Aforementioned surveys have 3 to 8 questions about doctors' care, which limits the ability to assess physician performance comprehensively. Finally, the surveys are mailed approximately 1 week after the patient's discharge, usually without a name or photograph of the physician to facilitate patient/caregiver recall. This time lag and lack of information to prompt patient recall likely lead to impreciseness in assessment. In addition, the response rates to these surveys are typically low, around 25% (personal oral communication with our division's service excellence stakeholder Dr. L.P. in September 2013). These deficiencies limit the usefulness of such data in coaching individual providers about their performance because they cannot be delivered in a timely fashion, and the reliability of the attribution is suspect.

With these considerations in mind, we developed and validated a new survey metric, the Tool to Assess Inpatient Satisfaction with Care from Hospitalists (TAISCH). We hypothesized that the results would be different from those collected using conventional methodologies.

PATIENTS AND METHODS

Study Design and Subjects

Our cross‐sectional study surveyed inpatients under the care of hospitalist physicians working without the support of trainees or allied health professionals (such as nurse practitioners or physician assistants). The subjects were hospitalized at a 560‐bed academic medical center on a general medical floor between September 2012 and December 2012. All participating hospitalist physicians were members of a division of hospital medicine.

TAISCH Development

Several steps were taken to establish content validity evidence.[5] We developed TAISCH by building upon the theoretical underpinnings of the quality of care measures that are endorsed by the SHM Membership Committee Guidelines for Hospitalists Patient Satisfaction.[4] This directive recommends that patient satisfaction with hospitalist care should be assessed across 6 domains: physician availability, physician concern for patients, physician communication skills, physician courteousness, physician clinical skills, and physician involvement of patients' families. Other existing validated measures tied to the quality of patient care were reviewed, and items related to the physician's care were considered for inclusion to further substantiate content validity.[6, 7, 8, 9, 10, 11, 12] Input from colleagues with expertise in clinical excellence and service excellence was also solicited. This included the director of Hopkins' Miller Coulson Academy of Clinical Excellence and the grant review committee members of the Johns Hopkins Osler Center for Clinical Excellence (who funded this study).[13, 14]

The preliminary instrument contained 17 items, including 2 conditional questions, and was first pilot tested on 5 hospitalized patients. We assessed the time it took to administer the surveys as well as patients' comments and questions about each survey item. This resulted in minor wording changes for clarification and changes in the order of the questions. We then pursued a second phase of piloting using the revised survey, which was administered to >20 patients. There were no further adjustments as patients reported that TAISCH was clear and concise.

From interviews with patients after pilot testing, it became clear that respondents were carefully reflecting on the quality of care and performance of their treating physician, thereby generating response process validity evidence.[5]

Data Collection

To ensure that patients had perspective upon which to base their assessment, they were only asked to appraise physicians after being cared for by the same hospitalist provider for at least 2 consecutive days. Patients who were on isolation, those who were non‐English speaking, and those with impaired decision‐making capacity (such as mental status change or dementia) were excluded. Patients were enrolled only if they could correctly name their doctor or at least identify a photograph of their hospitalist provider on a page that included pictures of all division members. Those patients who were able to name the provider or correctly select the provider from the page of photographs were considered to have correctly identified their provider. In order to ensure the confidentiality of the patients and their responses, all data collections were performed by a trained research assistant who had no patient‐care responsibilities. The survey was confidential, did not include any patient identifiers, and patients were assured that providers would never see their individual responses. The patients were given options to complete TAISCH either by verbally responding to the research assistant's questions, filling out the paper survey, or completing the survey online using an iPad at the bedside. TAISCH specifically asked the patients to rate their hospitalist provider's performance along several domains: communication skills, clinical skills, availability, empathy, courteousness, and discharge planning; 5‐point Likert scales were used exclusively.

In addition to the TAISCH questions, we asked patients (1) an overall satisfaction question, I would recommend Dr. X to my loved ones should he or she need hospitalization in the future (response options: strongly disagree, disagree, neutral, agree, strongly agree), (2) their pain level using the Wong‐Baker pain scale,[15] and (3) the Jefferson Scale of Patient's Perceptions of Physician Empathy (JSPPPE).[16, 17] Associations between TAISCH and these variables (as well as PG data) would be examined to ascertain relations to other variables validity evidence.[5] Specifically, we sought to ascertain discriminant and convergent validity where the TAISCH is associated positively with constructs where we expect positive associations (convergent) and negatively with those we expect negative associations (discriminant).[18] The Wong‐Baker pain scale is a recommended pain‐assessment tool by the Joint Commission on Accreditation of Healthcare Organizations, and is widely used in hospitals and various healthcare settings.[19] The scale has a range from 0 to 10 (0 for no pain and 10 indicating the worst pain). The hypothesis was that the patients' pain levels would adversely affect their perception of the physician's performance (discriminant validity). JSPPPE is a 5‐item validated scale developed to measure patients' perceptions of their physicians' empathic engagement. It has significant correlations with the American Board of Internal Medicine's patient rating surveys, and it is used in standardized patient examinations for medical students.[20] The hypothesis was that patient perception about the quality of physician care would correlate positively with their assessment of the physician's empathy (convergent validity).

Although all of the hospitalist providers in the division consented to participate in this study, only hospitalist providers for whom at least 4 patient surveys were collected were included in the analysis. The study was approved by our institutional review board.

Data Analysis

All data were analyzed using Stata 11 (StataCorp, College Station, TX). Data were analyzed to determine the potential for a single comprehensive assessment of physician performance with confirmatory factor analysis (CFA) using maximum likelihood extraction. Additional factor analyses examined the potential for a multiple factor solution using exploratory factor analysis (EFA) with principle component factor analysis and varimax rotation. Examination of scree plots, factor loadings for individual items greater than 0.40, eigenvalues greater than 1.0, and substantive meaning of the factors were all taken into consideration when determining the number of factors to retain from factor analytic models.[21] Cronbach's s were calculated for each factor to assess reliability. These data provided internal structure validity evidence (demonstrated by acceptable reliability and factor structure) to TAISCH.[5]

After arriving at the final TAISCH scale, composite TAISCH scores were computed. Associations between composite TAISCH scores with the Wong‐Baker pain scale, the JSPPPE, and the overall satisfaction question were assessed using linear regression with the svy command in Stata to account for the nested design of having each patient report on a single hospitalist provider. Correlation between composite TAISCH score and PG physician care scores (comprised of 5 questions: time physician spent with you, physician concern with questions/worries, physician kept you informed, friendliness/courtesy of physician, and skill of physician) were assessed at the provider level when both data were available.

RESULTS

A total of 330 patients were considered to be eligible through medical record screening. Of those patients, 73 (22%) were already discharged by the time the research assistant attempted to enroll them after 2 days of care by a single physician. Of 257 inpatients approached, 30 patients (12%) refused to participate. Among the 227 consented patients, 24 (9%) were excluded as they were unable to correctly identify their hospitalist provider. A total of 203 patients were enrolled, and each patient rated a single hospitalist; a total of 29 unique hospitalists were assessed by these patients. The patients' mean age was 60 years, 114 (56%) were female, and 61 (30%) were of nonwhite race (Table 1). The hospitalist physicians' demographic information is also shown in Table 1. Two hospitalists with fewer than 4 surveys collected were excluded from the analysis. Thus, final analysis included 200 unique patients assessing 1 of the 27 hospitalists (mean=7.4 surveys per hospitalist).

Characteristics of the 203 Patients and 29 Hospitalist Physicians Studied
CharacteristicsValue
  • NOTE: Abbreviations: SD, standard deviation.

Patients, N=203 
Age, y, mean (SD)60.0 (17.2)
Female, n (%)114 (56.1)
Nonwhite race, n (%)61 (30.5)
Observation stay, n (%)45 (22.1)
How are you feeling today? n (%) 
Very poor11 (5.5)
Poor14 (7.0)
Fair67 (33.5)
Good71 (35.5)
Very good33 (16.5)
Excellent4 (2.0)
Hospitalists, N=29 
Age, n (%) 
2630 years7 (24.1)
3135 years8 (27.6)
3640 years12 (41.4)
4145 years2 (6.9)
Female, n (%)11 (37.9)
International medical graduate, n (%)18 (62.1)
Years in current practice, n (%) 
<19 (31.0)
127 (24.1)
346 (20.7)
565 (17.2)
7 or more2 (6.9)
Race, n (%) 
Caucasian4 (13.8)
Asian19 (65.5)
African/African American5 (17.2)
Other1 (3.4)
Academic rank, n (%) 
Assistant professor9 (31.0)
Clinical instructor10 (34.5)
Clinical associate/nonfaculty10 (34.5)
Percentage of clinical effort, n (%) 
>70%6 (20.7)
50%70%19 (65.5)
<50%4 (13.8)

Validation of TAISCH

On the 17‐item TAISCH administered, the 2 conditional questions (When I asked to see Dr. X, s/he came within a reasonable amount of time. and If Dr. X interacted with your family, how well did s/he deal with them?) were applicable to fewer than 40% of patients. As such, they were not included in the analysis.

Internal Structure Validity Evidence

Results from factor analyses are shown in Table 2. The CFA modeling of a single factor solution with 15 items explained 42% of the total variance. The 27 hospitalists' average 15‐item TAISCH score ranged from 3.25 to 4.28 (mean [standard deviation]=3.82 [0.24]; possible score range: 15). Reliability of the 15‐item TAISCH was appropriate (Cronbach's =0.88).

Factor Loadings for 15‐Item TAISCH Measure Based on Confirmatory Factor Analysis
TAISCH (Cronbach's =0.88)Factor Loading
  • NOTE: Abbreviations: TAISCH, Tool to Assess Inpatient Satisfaction with Care from Hospitalists. *Response category: below average, average, above average, top 10% of all doctors, the very best of any doctor I have come across. Response category: none, a little, some, a lot, tremendously. Response category: strongly disagree, disagree, neutral, agree, strongly agree. Response category: poor, fair, good, very good, excellent. Response category: never, rarely, sometimes, most of the time, every single time.

Compared to all other physicians that you know, how do you rate Dr. X's compassion, empathy, and concern for you?*0.91
Compared to all other physicians that you know, how do you rate Dr. X's ability to communicate with you?*0.88
Compared to all other physicians that you know, how do you rate Dr. X's skill in diagnosing and treating your medical conditions?*0.88
Compared to all other physicians that you know, how do you rate Dr. X's fund of knowledge?*0.80
How much confidence do you have in Dr. X's plan for your care?0.71
Dr. X kept me informed of the plans for my care.0.69
Effectively preparing patients for discharge is an important part of what doctors in the hospital do. How well has Dr. X done in getting you ready to be discharged from the hospital?0.67
Dr. X let me talk without interrupting.0.60
Dr. X encouraged me to ask questions.0.59
Dr. X checks to be sure I understood everything.0.55
I sensed Dr. X was in a rush when s/he was with me. (reverse coded)0.55
Dr. X showed interest in my views and opinions about my health.0.54
Dr. X discusses options with me and involves me in decision making.0.47
Dr. X asked permission to enter the room and waited for an answer.0.25
Dr. X sat down when s/he visited my bedside.0.14

As shown in Table 2, 2 variables had factor loadings below the minimum threshold of 0.40 in the CFA for the 15‐item TAISCH when modeling a single factor solution. Both items were related to physician etiquette: Dr. X asked permission to enter the room and waited for an answer. and Dr. X sat down when he/she visited my bedside.

When CFA was executed again, as a single factor omitting the 2 items that demonstrated lower factor loadings, the 13‐item single factor solution explained 47% of the total variance, and the Cronbach's was 0.92.

EFA models were also explored for potential alternate solutions. These analyses resulted in lesser reliability (low Cronbach's ), weak construct operationalization, and poor face validity (as judged by the research team).

Both the 13‐ and 15‐item single factor solutions were examined further to determine whether associations with criterion variables (pain, empathy) differed substantively. Given that results were similar across both solutions, subsequent analyses were completed with the 15‐item single factor solution, which included the etiquette‐related variables.

Relationship to Other Variables Validity Evidence

The association between the 15‐item TAISCH and JSPPPE was significantly positive (=12.2, P<0.001). Additionally, there was a positive and significant association between TAISCH and the overall satisfaction question: I would recommend Dr. X to my loved ones should they need hospitalization in the future. (=11.2, P<0.001). This overall satisfaction question was also associated positively with JSPPPE (=13.2, P<0.001). There was a statistically significant negative association between TAISCH and Wong‐Baker pain scale (=2.42, P<0.05).

The PG data from the same period were available for 24 out of 27 hospitalists. The number of PG surveys collected per provider ranged from 5 to 30 (mean=14). At the provider level, there was not a statistically significant correlation between PG and the 15‐item TAISCH (P=0.51). Of note, PG was also not significantly correlated with the overall satisfaction question, JSPPPE, or the Wong‐Baker pain scale (all P>0.10).

DISCUSSION

Our new metric, TAISCH, was found to be a reliable and valid measurement tool to assess patient satisfaction with the hospitalist physician's care. Because we only surveyed patients who could correctly identify their hospitalist physicians after interacting for at least 2 consecutive days, the attribution of the data to the individual hospitalist is almost certainly correct. The high participation rate indicates that the patients were not hesitant about rating their hospitalist provider's quality of care, even when asked while they were still in the hospital.

The majority of the patients approached were able to correctly identify their hospitalist provider. This rate (91%) was much higher than the rate previously reported in the literature where a picture card was used to improve provider recognition.[22] It is also likely that 1 physician, rather than a team of physicians, taking care of patients make it easier for patients to recall the name and recognize the face of their inpatient provider.

The CFA of TAISCH showed good fit but suggests that 2 variables, both from Kahn's etiquette‐based medicine (EtBM) checklist,[9] may not load in the same way as the other items. Tackett and colleagues reported that hospitalists who performed more EtBM behaviors scored higher on PG evaluations.[23] Such results, along with the comparable explanation of variance and reliability, convinced us to retain these 2 items in the final 15‐item TAISCH as dictated by the CFA. Although the literature supports the fact that physician etiquette is related to perception of high‐quality care, it is possible that these 2 questions were answered differently (and thereby failed to load the same way), because environmental limitations may be preventing physicians' ability to perform them consistently. We prefer the 15‐item version of TAISCH and future studies may provide additional information about its performance as compared to the 13‐item adaptation.

The significantly negative association between the Wong‐Baker pain scale and TAISCH stresses the importance of adequately addressing and treating the patient's pain. Hanna et al. showed that the patients' perceptions of pain control was associated with their overall satisfaction score measured by HCAHPS.[24] The association seen in our study was not unexpected, because TAISCH is administered while the patients are acutely ill in the hospital, when pain is likely more prevalent and severe than it is during the postdischarge settings (when the HCAHPS or PG surveys are administered). Interestingly, Hanna et al. discovered that the team's attention to controlling pain was more strongly correlated with overall satisfaction than was the actual pain control.[24] These data, now confirmed by our study, should serve to remind us that a hospitalist's concern and effort to relieve pain may augment patient satisfaction with the quality of care, even when eliminating the pain may be difficult or impossible.

TAISCH was found not to be correlated with PG scores. Several explanations for this deserve consideration. First, the postdischarge PG survey that is used for our institution does not list the name of the specific hospitalist providers for the patients to evaluate. Because patients encounter multiple physicians during their hospital stay (eg, emergency department physicians, hospitalist providers, consultants), it is possible that patients are not reflecting on the named doctor when assessing the the attending of record on the PG mailed questionnaire. Second, the representation of patients who responded to TAISCH and PG were different; almost all patients completed TAISCH as opposed to a small minority who decide to respond to the PG survey. Third, TAISCH measures the physicians' performance more comprehensively with a larger number of variables. Last, it is possible that we were underpowered to detect significant correlation, because there were only 24 providers who had data from both TAISCH and PG. However, our results endorse using caution in interpreting PG scores for individual hospitalist's performance, particularly for high‐stakes consequences (including the provision of incentives to high performer and the insistence on remediation for low performers).

Several limitations of this study should be considered. First, only hospitalist providers from a single division were assessed. This may limit the generalizability of our findings. Second, although patients were assured about confidentiality of their responses, they might have provided more favorable answers, because they may have felt uncomfortable rating their physician poorly. One review article of the measurement of healthcare satisfaction indicated that impersonal (mailed) methods result in more criticism and lower satisfaction than assessments made in person using interviews. As the trade‐off, the mailed surveys yield lower response rates that may introduce other forms of bias.[25] Even on the HCHAPS survey report for the same period from our institution, 78% of patients gave top box ratings for our doctors' communication skills, which is at the state average.[26] Similarly, a study that used postdischarge telephone interviews to collect patients' satisfaction with hospitalists' care quality reported an average score of 4.20 out of 5.[27] These findings confirm that highly skewed ratings are common for these types of surveys, irrespective of how or when the data are collected.

Despite the aforementioned limitations, TAISCH use need not be limited to hospitalist physicians. It may also be used to assess allied health professionals or trainees performance, which cannot be assessed by HCHAPS or PG. Applying TAISCH in different hospital settings (eg, emergency department or critical care units), assessing hospitalists' reactions to TAISCH, learning whether TAISCH leads to hospitalists' behavior changes or appraising whether performance can improve in response to coaching interventions for those performing poorly are all research questions that merit additional consideration.

CONCLUSION

TAISCH allows for obtaining patient satisfaction data that are highly attributable to specific hospitalist providers. The data collection method also permits high response rates so that input comes from almost all patients. The timeliness of the TAISCH assessments also makes it possible for real‐time service recovery, which is impossible with other commonly used metrics assessing patient satisfaction. Our next step will include testing the most effective way to provide feedback to providers and to coach these individuals so as to improve performance.

Acknowledgements

The authors would like to thank Po‐Han Chen at the BEAD Core for his statistical analysis support.

Disclosures: This study was supported by the Johns Hopkins Osler Center for Clinical Excellence. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. The authors report no conflicts of interest.

Patient satisfaction scores are being reported publicly and will affect hospital reimbursement rates under Hospital Value Based Purchasing.[1] Patient satisfaction scores are currently obtained through metrics such as Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS)[2] and Press Ganey (PG)[3] surveys. Such surveys are mailed to a variable proportion of patients following their discharge from the hospital, and ask patients about the quality of care they received during their admission. Domains assessed regarding the patients' inpatient experiences range from room cleanliness to the amount of time the physician spent with them.

The Society of Hospital Medicine (SHM), the largest professional medical society representing hospitalists, encourages the use of patient satisfaction surveys to measure hospitalist providers' quality of patient care.[4] Ideally, accurate information would be delivered as feedback to individual providers in a timely manner in hopes of improving performance; however, the current methodology has shortcomings that limit its usefulness. First, several hospitalists and consultants may be involved in the care of 1 patient during the hospital stay, but the score can only be tied to a single physician. Current survey methods attribute all responses to that particular doctor, usually the attending of record, although patients may very well be thinking of other physicians when responding to questions. Second, only a few questions on the surveys ask about doctors' performance. Aforementioned surveys have 3 to 8 questions about doctors' care, which limits the ability to assess physician performance comprehensively. Finally, the surveys are mailed approximately 1 week after the patient's discharge, usually without a name or photograph of the physician to facilitate patient/caregiver recall. This time lag and lack of information to prompt patient recall likely lead to impreciseness in assessment. In addition, the response rates to these surveys are typically low, around 25% (personal oral communication with our division's service excellence stakeholder Dr. L.P. in September 2013). These deficiencies limit the usefulness of such data in coaching individual providers about their performance because they cannot be delivered in a timely fashion, and the reliability of the attribution is suspect.

With these considerations in mind, we developed and validated a new survey metric, the Tool to Assess Inpatient Satisfaction with Care from Hospitalists (TAISCH). We hypothesized that the results would be different from those collected using conventional methodologies.

PATIENTS AND METHODS

Study Design and Subjects

Our cross‐sectional study surveyed inpatients under the care of hospitalist physicians working without the support of trainees or allied health professionals (such as nurse practitioners or physician assistants). The subjects were hospitalized at a 560‐bed academic medical center on a general medical floor between September 2012 and December 2012. All participating hospitalist physicians were members of a division of hospital medicine.

TAISCH Development

Several steps were taken to establish content validity evidence.[5] We developed TAISCH by building upon the theoretical underpinnings of the quality of care measures that are endorsed by the SHM Membership Committee Guidelines for Hospitalists Patient Satisfaction.[4] This directive recommends that patient satisfaction with hospitalist care should be assessed across 6 domains: physician availability, physician concern for patients, physician communication skills, physician courteousness, physician clinical skills, and physician involvement of patients' families. Other existing validated measures tied to the quality of patient care were reviewed, and items related to the physician's care were considered for inclusion to further substantiate content validity.[6, 7, 8, 9, 10, 11, 12] Input from colleagues with expertise in clinical excellence and service excellence was also solicited. This included the director of Hopkins' Miller Coulson Academy of Clinical Excellence and the grant review committee members of the Johns Hopkins Osler Center for Clinical Excellence (who funded this study).[13, 14]

The preliminary instrument contained 17 items, including 2 conditional questions, and was first pilot tested on 5 hospitalized patients. We assessed the time it took to administer the surveys as well as patients' comments and questions about each survey item. This resulted in minor wording changes for clarification and changes in the order of the questions. We then pursued a second phase of piloting using the revised survey, which was administered to >20 patients. There were no further adjustments as patients reported that TAISCH was clear and concise.

From interviews with patients after pilot testing, it became clear that respondents were carefully reflecting on the quality of care and performance of their treating physician, thereby generating response process validity evidence.[5]

Data Collection

To ensure that patients had perspective upon which to base their assessment, they were only asked to appraise physicians after being cared for by the same hospitalist provider for at least 2 consecutive days. Patients who were on isolation precautions, those who were non‐English speaking, and those with impaired decision‐making capacity (such as mental status change or dementia) were excluded. Patients were enrolled only if they could correctly identify their provider, either by naming their doctor or by selecting his or her photograph from a page that included pictures of all division members. In order to ensure the confidentiality of the patients and their responses, all data collection was performed by a trained research assistant who had no patient‐care responsibilities. The survey was confidential, did not include any patient identifiers, and patients were assured that providers would never see their individual responses. Patients were given the option of completing TAISCH by verbally responding to the research assistant's questions, by filling out the paper survey, or by completing the survey online using an iPad at the bedside. TAISCH specifically asked patients to rate their hospitalist provider's performance along several domains: communication skills, clinical skills, availability, empathy, courteousness, and discharge planning; 5‐point Likert scales were used exclusively.

In addition to the TAISCH questions, we asked patients (1) an overall satisfaction question, "I would recommend Dr. X to my loved ones should he or she need hospitalization in the future" (response options: strongly disagree, disagree, neutral, agree, strongly agree); (2) their pain level, using the Wong‐Baker pain scale[15]; and (3) the Jefferson Scale of Patient's Perceptions of Physician Empathy (JSPPPE).[16, 17] Associations between TAISCH and these variables (as well as PG data) were examined to establish "relations to other variables" validity evidence.[5] Specifically, we sought convergent validity evidence (positive associations between TAISCH and constructs expected to relate positively to it) and discriminant validity evidence (negative associations with constructs expected to relate negatively).[18] The Wong‐Baker pain scale is a pain‐assessment tool recommended by the Joint Commission on Accreditation of Healthcare Organizations and is widely used in hospitals and other healthcare settings.[19] The scale ranges from 0 to 10 (0 for no pain and 10 indicating the worst pain). The hypothesis was that patients' pain levels would adversely affect their perception of the physician's performance (discriminant validity). The JSPPPE is a 5‐item validated scale developed to measure patients' perceptions of their physicians' empathic engagement. It correlates significantly with the American Board of Internal Medicine's patient rating surveys, and it is used in standardized patient examinations for medical students.[20] The hypothesis was that patients' perceptions about the quality of physician care would correlate positively with their assessment of the physician's empathy (convergent validity).

Although all of the hospitalist providers in the division consented to participate in this study, only hospitalist providers for whom at least 4 patient surveys were collected were included in the analysis. The study was approved by our institutional review board.

Data Analysis

All data were analyzed using Stata 11 (StataCorp, College Station, TX). Data were first analyzed to determine the potential for a single comprehensive assessment of physician performance with confirmatory factor analysis (CFA) using maximum likelihood extraction. Additional factor analyses examined the potential for a multiple‐factor solution using exploratory factor analysis (EFA) with principal component factor analysis and varimax rotation. Examination of scree plots, factor loadings for individual items greater than 0.40, eigenvalues greater than 1.0, and the substantive meaning of the factors were all taken into consideration when determining the number of factors to retain from the factor analytic models.[21] Cronbach's α was calculated for each factor to assess reliability. These data provided "internal structure" validity evidence (demonstrated by acceptable reliability and factor structure) for TAISCH.[5]
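For readers wishing to reproduce this analytic sequence, a minimal sketch in Stata (the package used for all analyses) follows. The variable names (q1 through q15 for the 15 TAISCH items, with q11 as the reverse-worded item) are hypothetical placeholders rather than the study's actual dataset fields.

```stata
* A minimal sketch of the factor-analytic steps described above.
* q1-q15 are hypothetical names for the 15 TAISCH items (scored 1-5),
* assumed to be stored in this order in the dataset.

* Reverse-score the negatively worded item ("I sensed Dr. X was in a rush")
gen q11r = 6 - q11

* Single-factor model using maximum likelihood extraction (CFA-style)
factor q1-q10 q11r q12-q15, ml factors(1)

* Exploratory alternative: principal component factoring with varimax rotation
factor q1-q10 q11r q12-q15, pcf
rotate, varimax

* Scree plot; eigenvalues >1.0 and item loadings >0.40 guide how many
* factors to retain
screeplot

* Internal consistency (Cronbach's alpha) of the retained item set
alpha q1-q10 q11r q12-q15, item
```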

After arriving at the final TAISCH scale, composite TAISCH scores were computed. Associations between composite TAISCH scores and the Wong‐Baker pain scale, the JSPPPE, and the overall satisfaction question were assessed using linear regression with the svy command in Stata to account for the nested design, in which each patient reported on a single hospitalist provider. The correlation between composite TAISCH scores and PG physician care scores (comprising 5 questions: time physician spent with you, physician concern with questions/worries, physician kept you informed, friendliness/courtesy of physician, and skill of physician) was assessed at the provider level when both sources of data were available.
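The composite-score and association analyses can be sketched similarly. All variable names below (taisch, pain_score, jspppe_total, recommend, pg_score, hospitalist_id) are again hypothetical, and because the text does not specify whether the composite was a mean or a sum of items, a mean is assumed here.

```stata
* Composite TAISCH score, assumed here to be the mean of the 15 items
egen taisch = rowmean(q1-q10 q11r q12-q15)

* Declare the nested design: patients are clustered within hospitalists
svyset hospitalist_id

* Patient-level associations with pain, empathy, and overall satisfaction
svy: regress taisch pain_score     // Wong-Baker pain scale (0-10)
svy: regress taisch jspppe_total   // JSPPPE empathy score
svy: regress taisch recommend      // "I would recommend Dr. X..."

* Provider-level correlation with Press Ganey physician-care scores
preserve
collapse (mean) taisch pg_score, by(hospitalist_id)
pwcorr taisch pg_score, sig
restore
```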

RESULTS

A total of 330 patients were considered eligible through medical record screening. Of those, 73 (22%) were already discharged by the time the research assistant attempted to enroll them after 2 days of care by a single physician. Of the 257 inpatients approached, 30 (12%) refused to participate. Among the 227 consented patients, 24 (9%) were excluded because they were unable to correctly identify their hospitalist provider. A total of 203 patients were enrolled; each patient rated a single hospitalist, and 29 unique hospitalists were assessed by these patients. The patients' mean age was 60 years, 114 (56%) were female, and 61 (30%) were of nonwhite race (Table 1). The hospitalist physicians' demographic information is also shown in Table 1. Two hospitalists with fewer than 4 surveys collected were excluded from the analysis. Thus, the final analysis included 200 unique patients, each assessing 1 of 27 hospitalists (mean, 7.4 surveys per hospitalist).

Characteristics of the 203 Patients and 29 Hospitalist Physicians Studied
Characteristic | Value
  • NOTE: Abbreviations: SD, standard deviation.

Patients, N=203
  Age, y, mean (SD) | 60.0 (17.2)
  Female, n (%) | 114 (56.1)
  Nonwhite race, n (%) | 61 (30.5)
  Observation stay, n (%) | 45 (22.1)
  How are you feeling today? n (%)
    Very poor | 11 (5.5)
    Poor | 14 (7.0)
    Fair | 67 (33.5)
    Good | 71 (35.5)
    Very good | 33 (16.5)
    Excellent | 4 (2.0)
Hospitalists, N=29
  Age, n (%)
    26-30 years | 7 (24.1)
    31-35 years | 8 (27.6)
    36-40 years | 12 (41.4)
    41-45 years | 2 (6.9)
  Female, n (%) | 11 (37.9)
  International medical graduate, n (%) | 18 (62.1)
  Years in current practice, n (%)
    <1 | 9 (31.0)
    1-2 | 7 (24.1)
    3-4 | 6 (20.7)
    5-6 | 5 (17.2)
    7 or more | 2 (6.9)
  Race, n (%)
    Caucasian | 4 (13.8)
    Asian | 19 (65.5)
    African/African American | 5 (17.2)
    Other | 1 (3.4)
  Academic rank, n (%)
    Assistant professor | 9 (31.0)
    Clinical instructor | 10 (34.5)
    Clinical associate/nonfaculty | 10 (34.5)
  Percentage of clinical effort, n (%)
    >70% | 6 (20.7)
    50%-70% | 19 (65.5)
    <50% | 4 (13.8)

Validation of TAISCH

On the 17‐item TAISCH administered, the 2 conditional questions ("When I asked to see Dr. X, s/he came within a reasonable amount of time" and "If Dr. X interacted with your family, how well did s/he deal with them?") were applicable to fewer than 40% of patients. As such, they were not included in the analysis.

Internal Structure Validity Evidence

Results from the factor analyses are shown in Table 2. The CFA modeling of a single‐factor solution with 15 items explained 42% of the total variance. The 27 hospitalists' average 15‐item TAISCH scores ranged from 3.25 to 4.28 (mean [standard deviation]=3.82 [0.24]; possible score range: 1-5). Reliability of the 15‐item TAISCH was acceptable (Cronbach's α=0.88).

Factor Loadings for 15‐Item TAISCH Measure Based on Confirmatory Factor Analysis
TAISCH Item (Cronbach's α=0.88) | Factor Loading
  • NOTE: Abbreviations: TAISCH, Tool to Assess Inpatient Satisfaction with Care from Hospitalists. *Response category: below average, average, above average, top 10% of all doctors, the very best of any doctor I have come across. The remaining items used one of the following response categories: none, a little, some, a lot, tremendously; strongly disagree, disagree, neutral, agree, strongly agree; poor, fair, good, very good, excellent; never, rarely, sometimes, most of the time, every single time.

Compared to all other physicians that you know, how do you rate Dr. X's compassion, empathy, and concern for you?* | 0.91
Compared to all other physicians that you know, how do you rate Dr. X's ability to communicate with you?* | 0.88
Compared to all other physicians that you know, how do you rate Dr. X's skill in diagnosing and treating your medical conditions?* | 0.88
Compared to all other physicians that you know, how do you rate Dr. X's fund of knowledge?* | 0.80
How much confidence do you have in Dr. X's plan for your care? | 0.71
Dr. X kept me informed of the plans for my care. | 0.69
Effectively preparing patients for discharge is an important part of what doctors in the hospital do. How well has Dr. X done in getting you ready to be discharged from the hospital? | 0.67
Dr. X let me talk without interrupting. | 0.60
Dr. X encouraged me to ask questions. | 0.59
Dr. X checks to be sure I understood everything. | 0.55
I sensed Dr. X was in a rush when s/he was with me. (reverse coded) | 0.55
Dr. X showed interest in my views and opinions about my health. | 0.54
Dr. X discusses options with me and involves me in decision making. | 0.47
Dr. X asked permission to enter the room and waited for an answer. | 0.25
Dr. X sat down when s/he visited my bedside. | 0.14

As shown in Table 2, 2 variables had factor loadings below the minimum threshold of 0.40 in the CFA for the 15‐item TAISCH when modeling a single‐factor solution. Both items were related to physician etiquette: "Dr. X asked permission to enter the room and waited for an answer" and "Dr. X sat down when s/he visited my bedside."

When the CFA was executed again as a single‐factor model omitting the 2 items with lower factor loadings, the 13‐item solution explained 47% of the total variance, and Cronbach's α was 0.92.

EFA models were also explored for potential alternative solutions. These analyses resulted in lower reliability (lower Cronbach's α), weaker construct operationalization, and poorer face validity (as judged by the research team).

Both the 13‐ and 15‐item single factor solutions were examined further to determine whether associations with criterion variables (pain, empathy) differed substantively. Given that results were similar across both solutions, subsequent analyses were completed with the 15‐item single factor solution, which included the etiquette‐related variables.

Relationship to Other Variables Validity Evidence

The association between the 15‐item TAISCH and the JSPPPE was significantly positive (β=12.2, P<0.001). Additionally, there was a positive and significant association between TAISCH and the overall satisfaction question, "I would recommend Dr. X to my loved ones should they need hospitalization in the future" (β=11.2, P<0.001). This overall satisfaction question was also positively associated with the JSPPPE (β=13.2, P<0.001). There was a statistically significant negative association between TAISCH and the Wong‐Baker pain scale (β=−2.42, P<0.05).

The PG data from the same period were available for 24 out of 27 hospitalists. The number of PG surveys collected per provider ranged from 5 to 30 (mean=14). At the provider level, there was not a statistically significant correlation between PG and the 15‐item TAISCH (P=0.51). Of note, PG was also not significantly correlated with the overall satisfaction question, JSPPPE, or the Wong‐Baker pain scale (all P>0.10).

DISCUSSION

Our new metric, TAISCH, was found to be a reliable and valid measurement tool to assess patient satisfaction with the hospitalist physician's care. Because we only surveyed patients who could correctly identify their hospitalist physicians after interacting for at least 2 consecutive days, the attribution of the data to the individual hospitalist is almost certainly correct. The high participation rate indicates that the patients were not hesitant about rating their hospitalist provider's quality of care, even when asked while they were still in the hospital.

The majority of the patients approached were able to correctly identify their hospitalist provider. This rate (91%) was much higher than the rate previously reported in the literature, where a picture card was used to improve provider recognition.[22] It is also likely that having 1 physician, rather than a team of physicians, care for patients makes it easier for patients to recall the name and recognize the face of their inpatient provider.

The CFA of TAISCH showed good fit but suggested that 2 variables, both from Kahn's etiquette‐based medicine (EtBM) checklist,[9] may not load in the same way as the other items. Tackett and colleagues reported that hospitalists who performed more EtBM behaviors scored higher on PG evaluations.[23] Such results, along with the comparable explained variance and reliability, convinced us to retain these 2 items in the final 15‐item TAISCH. Although the literature supports the idea that physician etiquette is related to the perception of high‐quality care, it is possible that these 2 questions were answered differently (and thereby failed to load the same way) because environmental limitations may prevent physicians from performing these behaviors consistently. We prefer the 15‐item version of TAISCH, and future studies may provide additional information about its performance as compared with the 13‐item adaptation.

The significantly negative association between the Wong‐Baker pain scale and TAISCH stresses the importance of adequately addressing and treating patients' pain. Hanna et al. showed that patients' perception of pain control was associated with their overall satisfaction score as measured by HCAHPS.[24] The association seen in our study was not unexpected, because TAISCH is administered while patients are acutely ill in the hospital, when pain is likely more prevalent and severe than it is in the postdischarge period (when the HCAHPS or PG surveys are administered). Interestingly, Hanna et al. discovered that the team's attention to controlling pain was more strongly correlated with overall satisfaction than was the actual pain control.[24] These data, now supported by our study, should serve to remind us that a hospitalist's concern and effort to relieve pain may augment patient satisfaction with the quality of care, even when eliminating the pain may be difficult or impossible.

TAISCH was found not to be correlated with PG scores. Several explanations for this deserve consideration. First, the postdischarge PG survey used at our institution does not list the name of the specific hospitalist provider for patients to evaluate. Because patients encounter multiple physicians during their hospital stay (eg, emergency department physicians, hospitalist providers, consultants), it is possible that patients are not reflecting on the attending of record when completing the PG mailed questionnaire. Second, the patients who responded to TAISCH and to PG were different groups; almost all eligible patients completed TAISCH, as opposed to the small minority who decided to respond to the PG survey. Third, TAISCH measures physicians' performance more comprehensively, with a larger number of variables. Last, it is possible that we were underpowered to detect a significant correlation, because only 24 providers had data from both TAISCH and PG. However, our results endorse caution in interpreting PG scores as measures of an individual hospitalist's performance, particularly for high‐stakes consequences (including the provision of incentives to high performers and the insistence on remediation for low performers).

Several limitations of this study should be considered. First, only hospitalist providers from a single division were assessed, which may limit the generalizability of our findings. Second, although patients were assured about the confidentiality of their responses, they might have provided more favorable answers because they felt uncomfortable rating their physician poorly. One review article on the measurement of healthcare satisfaction indicated that impersonal (mailed) methods result in more criticism and lower satisfaction scores than assessments made in person using interviews; as the trade‐off, mailed surveys yield lower response rates that may introduce other forms of bias.[25] Even on the HCAHPS survey report for the same period from our institution, 78% of patients gave top‐box ratings for our doctors' communication skills, which is at the state average.[26] Similarly, a study that used postdischarge telephone interviews to collect patients' satisfaction with hospitalists' care quality reported an average score of 4.20 out of 5.[27] These findings confirm that highly skewed ratings are common for these types of surveys, irrespective of how or when the data are collected.

Despite the aforementioned limitations, TAISCH use need not be limited to hospitalist physicians. It may also be used to assess allied health professionals' or trainees' performance, which cannot be assessed by HCAHPS or PG. Applying TAISCH in different hospital settings (eg, the emergency department or critical care units), assessing hospitalists' reactions to TAISCH, learning whether TAISCH leads to changes in hospitalists' behavior, and appraising whether performance improves in response to coaching interventions for those performing poorly are all research questions that merit additional consideration.

CONCLUSION

TAISCH allows for obtaining patient satisfaction data that are highly attributable to specific hospitalist providers. The data collection method also permits high response rates so that input comes from almost all patients. The timeliness of the TAISCH assessments also makes it possible for real‐time service recovery, which is impossible with other commonly used metrics assessing patient satisfaction. Our next step will include testing the most effective way to provide feedback to providers and to coach these individuals so as to improve performance.

Acknowledgements

The authors would like to thank Po‐Han Chen at the BEAD Core for his statistical analysis support.

Disclosures: This study was supported by the Johns Hopkins Osler Center for Clinical Excellence. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine. The authors report no conflicts of interest.

References
  1. Blumenthal D, Jena AB. Hospital value‐based purchasing. J Hosp Med. 2013;8:271-277.
  2. HCAHPS survey. Hospital Consumer Assessment of Healthcare Providers and Systems website. Available at: http://www.hcahpsonline.org/home.aspx. Accessed August 27, 2011.
  3. Press Ganey survey. Press Ganey website. Available at: http://www.pressganey.com/index.aspx. Accessed February 12, 2013.
  4. Society of Hospital Medicine. Membership Committee Guidelines for Hospitalists Patient Satisfaction Surveys. Available at: http://www.hospitalmedicine.org/AM/Template.cfm?Section=Practice_Resources119:166.e7e16.
  5. Makoul G, Krupat E, Chang CH. Measuring patient views of physician communication skills: development and testing of the Communication Assessment Tool. Patient Educ Couns. 2007;67:333-342.
  6. Jenkinson C, Coulter A, Bruster S. The Picker Patient Experience Questionnaire: development and validation using data from in‐patient surveys in five countries. Int J Qual Health Care. 2002;14:353-358.
  7. The Patient Satisfaction Questionnaire from RAND Health. RAND Health website. Available at: http://www.rand.org/health/surveys_tools/psq.html. Accessed December 30, 2011.
  8. Kahn MW. Etiquette‐based medicine. N Engl J Med. 2008;358:1988-1989.
  9. Christmas C, Kravet S, Durso C, Wright SM. Defining clinical excellence in academic medicine: a qualitative study of the master clinicians. Mayo Clin Proc. 2008;83:989-994.
  10. Wright SM, Christmas C, Burkhart K, Kravet S, Durso C. Creating an academy of clinical excellence at Johns Hopkins Bayview Medical Center: a 3‐year experience. Acad Med. 2010;85:1833-1839.
  11. Bendapudi NM, Berry LL, Keith FA, Turner Parish J, Rayburn WL. Patients' perspectives on ideal physician behaviors. Mayo Clin Proc. 2006;81(3):338-344.
  12. The Miller‐Coulson Academy of Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/innovative/signature_programs/academy_of_clinical_excellence/. Accessed April 25, 2014.
  13. Osler Center for Clinical Excellence at Johns Hopkins. Available at: http://www.hopkinsmedicine.org/johns_hopkins_bayview/education_training/continuing_education/osler_center_for_clinical_excellence. Accessed April 25, 2014.
  14. Wong‐Baker FACES Foundation. Available at: http://www.wongbakerfaces.org. Accessed July 8, 2013.
  15. Kane GC, Gotto JL, Mangione S, West S, Hojat M. Jefferson Scale of Patient's Perceptions of Physician Empathy: preliminary psychometric data. Croat Med J. 2007;48:81-86.
  16. Glaser KM, Markham FW, Adler HM, McManus PR, Hojat M. Relationships between scores on the Jefferson Scale of Physician Empathy, patient perceptions of physician empathy, and humanistic approaches to patient care: a validity study. Med Sci Monit. 2007;13(7):CR291-CR294.
  17. Campbell DT, Fiske DW. Convergent and discriminant validation by the multitrait‐multimethod matrix. Psychol Bull. 1959;56(2):81-105.
  18. The Joint Commission. Facts about pain management. Available at: http://www.jointcommission.org/pain_management. Accessed April 25, 2014.
  19. Berg K, Majdan JF, Berg D, et al. Medical students' self‐reported empathy and simulated patients' assessments of student empathy: an analysis by gender and ethnicity. Acad Med. 2011;86(8):984-988.
  20. Gorsuch RL. Factor Analysis. Hillsdale, NJ: Lawrence Erlbaum Associates; 1983.
  21. Arora VM, Schaninger C, D'Arcy M, et al. Improving inpatients' identification of their doctors: use of FACE cards. Jt Comm J Qual Patient Saf. 2009;35(12):613-619.
  22. Tackett S, Tad‐y D, Rios R, et al. Appraising the practice of etiquette‐based medicine in the inpatient setting. J Gen Intern Med. 2013;28(7):908-913.
  23. Hanna MN, Gonzalez‐Fernandez M, Barrett AD, et al. Does patient perception of pain control affect patient satisfaction across surgical units in a tertiary teaching hospital? Am J Med Qual. 2012;27:411-416.
  24. Crow R, Gage H, Hampson S, et al. The measurement of satisfaction with health care: implications for practice from a systematic review of the literature. Health Technol Assess. 2002;6(32):1-244.
  25. Centers for Medicare 7(2):131-136.
Issue
Journal of Hospital Medicine - 9(9)
Page Number
553-558
Article Source
© 2014 Society of Hospital Medicine
Correspondence Location
Address for correspondence and reprint requests: Haruka Torok, MD, Johns Hopkins University School of Medicine, Johns Hopkins Bayview Medical Center, 5200 Eastern Ave., MFL Bldg, West Tower 6th Floor CIMS Suite, Baltimore, MD 21224; Telephone: 410‐550‐5018; Fax: 410‐550‐2972; E‐mail: [email protected]

Learning Needs of Physician Assistants

Article Type
Changed
Mon, 05/22/2017 - 19:38
Display Headline
Learning needs of physician assistants working in hospital medicine

Physician assistants (PAs) have rapidly become an integral component of the United States health care delivery system, including in the field of Hospital Medicine, the fastest growing medical field in the United States.1, 2 Since the field's inception in 1997, the number of hospitalist providers in North America has increased 30‐fold.3 In parallel, the number of PAs practicing in hospital medicine has also grown greatly in recent years. According to the American Academy of Physician Assistants (AAPA) census reports, Hospital Medicine first appeared as one of the specialty choices in the 2006 census (response rate, 33% of all individuals eligible to practice as PAs), when it was selected as the primary specialty by 239 PAs (1.1% of respondents). In the 2008 report (response rate, 35%), the number grew to 421 (1.7%) PAs.2

PA training programs emphasize primary care and offer limited exposure to inpatient medicine. After PA students complete their first 12 months of didactic coursework in the basic sciences, they typically spend the next year on clinical rotations, largely rooted in outpatient care.2, 4 Upon graduation, PAs do not have to pursue postgraduate training before beginning to practice in their preferred specialty areas. Thus, a majority of PAs going into specialty areas are trained on the job, and hospital medicine is no exception.

In recent years, despite an increase in the number of PAs in Hospital Medicine, some medical centers have chosen to phase out the use of midlevel hospitalist providers (including PAs), making the purposeful decision not to hire new midlevel providers.5 The rationale for this strategy is the perception of a steep learning curve that takes much time to overcome before these providers feel comfortable across the breadth of clinical cases; until they become experienced and confident in caring for a highly complex, heterogeneous patient population, they cannot operate autonomously and are not a cost‐effective alternative to physicians. The complexities associated with practicing in this field were clarified in 2006, when the Society of Hospital Medicine identified 51 core competencies in hospital medicine.3, 6 Some hospitalist programs are willing to provide their PAs with on‐the‐job training, but many programs do not have the educational expertise or the resources to make this happen. Structured and focused postgraduate training in hospital medicine seems like a reasonable solution to prepare newly graduating PAs who are interested in pursuing hospitalist careers, but such opportunities are very limited.7

To date, there is no available information about the learning needs of PAs working in hospital medicine settings. We hypothesized that understanding the learning needs of PA hospitalists would inform the development of more effective and efficient training programs. We studied PAs with experience working in hospital medicine to (1) identify self‐perceived gaps in their skills and knowledge upon starting their hospitalist careers and (2) understand their views about optimal training for careers in hospital medicine.

METHODS

Study Design

We conducted a cross‐sectional survey of a convenience sample of self‐identified PAs working in adult Hospital Medicine. The survey was distributed using an electronic survey program.

Participants

The subjects for the survey were identified through the Facebook group PAs in Hospital Medicine, which had 133 members as of July 2010. This source was selected because it was the most comprehensive list of self‐identified hospitalist PAs. Additionally, the group allowed us to send individualized invitations to complete the survey along with subsequent reminder messages to nonresponders. Subjects were eligible to participate if they were PAs with experience working in hospital medicine settings taking care of adult internal medicine inpatients.

Survey Instrument

The survey instrument was developed based on the Core Competencies in Hospital Medicine with the goal of identifying PA hospitalists' knowledge and skill gaps that were present when they started their hospitalist career.

In one section, respondents were asked about content areas among the Core Competencies in Hospital Medicine that they believed would have enhanced their effectiveness in practicing hospital medicine had they had additional training before starting their work as hospitalists. Response options ranged from "Strongly Agree" to "Strongly Disagree." Because some content areas seemed more relevant to physicians, our study team (including a hospitalist physician, a senior hospitalist PA, two curriculum development experts, one medical education research expert, and an experienced hospital medicine research assistant) selected, through rigorous discussion, the topics felt to be particularly germane to PA hospitalists. The relevance of this content to PA hospitalists was confirmed through pilot testing of the instrument. Another series of questions asked the PAs about their views on formal postgraduate training programs. The subjects were also queried about the frequency with which they performed various procedures (using the following scale: Never, Rarely [1‐2/year], Regularly [1‐2/month], Often [1‐2/week]) and whether they felt it was necessary for PAs to have the procedural skills listed as part of the Core Competencies in Hospital Medicine (using the following scale: Not necessary, Preferable, Essential). Finally, the survey included a question about the PAs' preferred learning methods, asking the degree of helpfulness of various approaches (using the following scale: Not at all, Little, Some, A lot, Tremendously). Demographic information was also collected. The instrument was pilot‐tested for clarity on the 9 PA hospitalists affiliated with our hospitalist service and was iteratively revised based on their feedback.

Data Collection and Analysis

Between September and December 2010, the survey invitations were sent as Facebook messages to the 133 members of the Facebook group PAs in Hospital Medicine. Sixteen members could not be contacted because their account setup did not allow us to send messages, and 14 were excluded because they were non‐PA members. In order to maximize participation, up to 4 reminder messages were sent to the 103 targeted PAs. The survey results were analyzed using Stata 11. Descriptive statistics were used to characterize the responses.
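As an illustration only, the descriptive summaries reported below could be generated in Stata roughly as follows; every variable name here is a hypothetical placeholder for the survey's actual fields.

```stata
* A minimal sketch of the descriptive analysis (hypothetical variable names).

* Continuous characteristics: means and standard deviations
summarize years_as_hospitalist

* Categorical characteristics: frequencies and percentages
tabulate age_group
tabulate salary_band

* Mean (SD) experience ratings for core clinical conditions (items scored 1-5)
summarize cond_uti cond_diabetes cond_sepsis
```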

This study protocol was approved by the institution's review board.

RESULTS

Sixty‐nine PAs responded (response rate, 67%). Table 1 provides demographic characteristics of the respondents. The majority of respondents were 26-35 years old and had worked as hospitalists for a mean of 4.3 years.

Characteristics of the 62 Physician Assistant Respondents Who Elected to Share Demographic and Personal Information
Characteristic* | Value
  • Abbreviations: ICU, intensive care unit; PA, physician assistant; SD, standard deviation.

  • *Seven PAs did not provide any personal or demographic information.

  • Because of missing data, numbers may not correspond to the exact percentages.

Age, years, n (%)
  <26 | 1 (2)
  26-30 | 16 (29)
  31-35 | 14 (25)
  36-40 | 10 (18)
  41-45 | 5 (9)
  >45 | 10 (18)
Women, n (%) | 35 (63)
Year of graduation from PA school, mode (SD) | 2002 (7)
No. of years working/worked as hospitalist, mean (SD) | 4.3 (3.4)
Completed any postgraduate training program, n (%) | 0 (0)
Hospitalist was the first PA job, n (%) | 30 (49)
Salary, US$, n (%)
  50,001-70,000 | 1 (2)
  70,001-90,000 | 32 (57)
  >90,000 | 23 (41)
Location of hospital, n (%)
  Urban | 35 (57)
  Suburban | 21 (34)
  Rural | 5 (8)
Hospital characteristics, n (%)
  Academic medical center | 25 (41)
  Community teaching hospital | 20 (33)
  Community nonteaching hospital | 16 (26)
Responsibilities in addition to taking care of inpatients on medicine floor, n (%)
  Care for patients in ICU | 22 (35)
  Perform inpatient consultations | 31 (50)
  See outpatients | 11 (18)

Clinical Conditions

Table 2 shows the respondents' experience with 19 core competency clinical conditions before beginning their careers as hospitalist PAs. They reported having the most experience in managing diabetes and urinary tract infections, and the least experience in managing hospital‐acquired pneumonia and sepsis syndrome.

Physician Assistant Experiences with 19 Core Clinical Conditions Before Starting Career in Hospital Medicine
Clinical Condition | Mean (SD)*
  • Abbreviation: SD, standard deviation.

  • *Likert scale: 1, no experience, I knew nothing about this condition; 2, no experience, I had heard/read about this condition; 3, I had experience caring for 1 patient (simulated or real) with this condition; 4, I had experience caring for 2-5 patients with this condition; 5, I had experience caring for many (>5) patients with this condition.

Urinary tract infection | 4.5 (0.8)
Diabetes mellitus | 4.5 (0.8)
Asthma | 4.4 (0.9)
Community‐acquired pneumonia | 4.3 (0.9)
Chronic obstructive pulmonary disease | 4.3 (1.0)
Cellulitis | 4.2 (0.9)
Congestive heart failure | 4.1 (1.0)
Cardiac arrhythmia | 3.9 (1.1)
Delirium and dementia | 3.8 (1.1)
Acute coronary syndrome | 3.8 (1.2)
Acute renal failure | 3.8 (1.1)
Gastrointestinal bleed | 3.7 (1.1)
Venous thromboembolism | 3.7 (1.2)
Pain management | 3.7 (1.2)
Perioperative medicine | 3.6 (1.4)
Stroke | 3.5 (1.2)
Alcohol and drug withdrawal | 3.4 (1.1)
Sepsis syndrome | 3.3 (1.1)
Hospital‐acquired pneumonia | 3.2 (1.1)

Procedures

Most PA hospitalists (67%) perform electrocardiogram and chest X‐ray interpretations regularly (more than 1-2/week). However, nearly all PA hospitalists never or rarely (less than 1-2/year) perform any invasive procedures, including arthrocentesis (98%), lumbar puncture (100%), paracentesis (91%), thoracentesis (98%), central line placement (91%), peripherally inserted central catheter placement (91%), and peripheral intravenous insertion (91%). Despite performing them infrequently, more than 50% of respondents indicated that it is either preferable or essential for PA hospitalists to be able to perform these procedures.

Content Knowledge

The PA hospitalists indicated which content areas might have allowed them to be more successful had they learned the material before starting their hospitalist careers (Table 3). The top 4 topics that PA hospitalists believed would have helped them most to care for inpatients were palliative care (85% agreed or strongly agreed), nutrition for hospitalized patients (84%), performing consultations in the hospital (64%), and prevention of health care-associated infections (62%).

Content Areas that 62 Respondent PAs Believed Would Have Enhanced Their Effectiveness in Practicing Hospital Medicine Had They Had Additional Training Before Starting Their Work as Hospitalists
Health Care System Topic | PAs Who Agreed or Strongly Agreed, n (%)
Palliative care | 47 (85)
Nutrition for hospitalized patients | 46 (84)
Performing consultations in hospital | 35 (64)
Prevention of health care-associated infections | 34 (62)
Diagnostic decision‐making processes | 32 (58)
Patient handoff and transitions of care | 31 (56)
Evidence‐based medicine | 28 (51)
Communication with patients and families | 27 (49)
Drug safety and drug interactions | 27 (49)
Team approach and multidisciplinary care | 26 (48)
Patient safety and quality improvement processes | 25 (45)
Care of elderly patients | 24 (44)
Medical ethics | 22 (40)
Patient education | 20 (36)
Care of uninsured or underinsured patients | 18 (33)

Professional Growth as Hospitalist Providers

PAs judged working with physician preceptors (mean ± SD, 4.5 ± 0.6) and discussing patients with consultants (mean ± SD, 4.3 ± 0.8) to be most helpful for their professional growth, whereas receiving feedback/audits about their performance (mean ± SD, 3.5 ± 1.0), attending conferences/lectures (mean ± SD, 3.6 ± 0.7), and reading journals/textbooks (mean ± SD, 3.6 ± 0.8) were rated as less useful. Respondents believed that the mean number of months required for new hospitalist PAs to become fully competent team members was 11 (SD, 8.6). Forty‐three percent of respondents shared the perspective that some clinical experience in an inpatient setting was an essential prerequisite for entry into a hospitalist position. Although more than half (58%) felt that completion of a postgraduate training program in hospital medicine was not necessary as a prerequisite, almost all (91%) explained that they would have been interested in such a program, even if it meant having a lower stipend than a hospitalist PA salary during the first year on the job (Table 4).

Self‐Reported Interest from 55 Respondents in Postgraduate Hospitalist Training Depending on Varying Levels of Incentives and Disincentives
Interest in Training | n (%)
Interested and willing to pay tuition | 1 (2)
Interested even if there was no stipend, as long as I didn't have to pay any additional tuition | 3 (5)
Interested ONLY if a stipend of at least 25% of a hospitalist PA salary was offered | 4 (7)
Interested ONLY if a stipend of at least 50% of a hospitalist PA salary was offered | 21 (38)
Interested ONLY if a stipend of at least 75% of a hospitalist PA salary was offered | 21 (38)
Interested ONLY if 100% of a hospitalist PA salary was offered | 4 (7)
Not interested under any circumstances | 1 (2)

DISCUSSION

Our survey addressed a wide range of topics related to PA hospitalists' learning needs, including their experience with the Core Competencies in Hospital Medicine and their views on the benefits of postgraduate PA training. Although self‐efficacy was not assessed, our study revealed that PAs who choose hospitalist careers have limited prior clinical experience treating many medical conditions that are managed in inpatient settings, such as sepsis syndrome. This inexperience with commonly seen clinical conditions, such as sepsis, wherein following guidelines can both reduce costs and improve outcomes, is problematic. More experience and training with such conditions would almost certainly reduce variability, improve skills, and augment confidence. The observed variations in experience in caring for conditions that often prompt admission to the hospital emphasize the need to be learner‐centered when training PAs, so as to provide tailored guidance and oversight.

Only a few other empirical research articles have focused on PA hospitalists. One article described a postgraduate training program for PAs in hospital medicine that was launched in 2008; the curriculum was developed based on the Core Competencies in Hospital Medicine, and the authors explained that after 12 months of training, their first graduate functioned at the level of a PA with 4 years of experience.7 Several articles describe experiences using midlevel providers (including PAs) in general surgery, primary care medicine, cardiology, emergency medicine, critical care, pediatrics, and hospital medicine settings.5, 8-20 Many of these articles reported favorable results showing that using midlevel providers was either superior to or just as effective as physician‐only models in terms of cost and quality measures. Many of these papers also alluded to the ways in which PAs have enabled graduate medical education training programs to comply with residents' duty‐hour restrictions. A recent analysis that compared outcomes of inpatient care provided by a hospitalist‐PA model versus a traditional resident‐based model revealed a slightly longer length of stay on the PA team but similar charges, readmission rates, and mortality.19 Yet another paper revealed that patients admitted to a residents' service, compared with a nonteaching hospitalist service staffed by PAs and nurse practitioners, were different, having higher comorbidity burdens and higher‐acuity diagnoses.20 The authors suggested that this variance might be explained by differences in the training, abilities, and goals of the groups. To our knowledge, no research article has sought to capture the perspectives of practicing hospitalist PAs.

Our study revealed that although half of respondents became hospitalists immediately after graduating from PA school, a majority agreed that additional clinical training in inpatient settings would have been welcomed and helpful. The results also reveal that although there is a fair amount of interest in postgraduate training programs in hospital medicine, there are very few such training opportunities for PAs.7, 21 The American Academy of Physician Assistants, the Society of Hospital Medicine, and the American Academy of Nurse Practitioners cosponsor an annual Adult Hospital Medicine Boot Camp for PAs and nurse practitioners to facilitate knowledge acquisition, but this course is truly an orientation rather than a comprehensive training program.22 Our findings suggest that more rigorous and thorough training in hospital medicine would be valued and appreciated by PA hospitalists.

Several limitations of this study should be considered. First, our survey respondents may not represent the entire spectrum of practicing PA hospitalists. However, the demographics of the 421 PAs who indicated their specialty as hospital medicine in the 2008 National Physician Assistants Census Report were not dissimilar from those of our informants: 65% were women, and their mean number of years in hospital medicine was 3.9.2 Second, our study sample was small; it was difficult to identify a national sample of hospitalist PAs, and we resorted to a creative use of social media to assemble one. Third, the study relied exclusively on self‐report, and because we asked about perceived learning needs at the time respondents started working as hospitalists, recall bias cannot be excluded. However, questions addressing attitudes and beliefs can only be answered by the informants themselves. That said, input from the hospitalist physicians supervising these PAs about their training needs would have strengthened the reliability of the data, but this was not possible given the sampling strategy we elected to use. Finally, our survey instrument was developed based on the Core Competencies in Hospital Medicine, which is a blueprint for developing standardized curricula for teaching hospital medicine in medical schools, postgraduate training programs (ie, residency, fellowship), and continuing medical education programs. It is not clear whether the same competencies should be expected of PA hospitalists, who may have different job descriptions from physician hospitalists.

In conclusion, we present the first national data on self‐perceived learning needs of PAs working in hospital medicine settings. This study collates the perceptions of PAs working in hospital medicine and highlights the fact that training in PA school does not adequately prepare them to care for hospitalized patients. Hospitalist groups may use this study's findings to coach and instruct newly hired or inexperienced hospitalist PAs, particularly until postgraduate training opportunities become more prevalent. PA schools may consider the results of this study for modifying their curricula in hopes of emphasizing the clinical content that may be most relevant for a proportion of their graduates.

Acknowledgements

The authors would like to thank Drs. David Kern and Belinda Chen at Johns Hopkins Bayview Medical Center for their assistance in developing the survey instrument.

Financial support: This study was supported by the Linda Brandt Research Award program of the Association of Postgraduate PA Programs. Dr. Wright is a Miller‐Coulson Family Scholar and was supported through the Johns Hopkins Center for Innovative Medicine.

Disclosures: Dr. Torok and Ms. Lackner received a Linda Brandt Research Award from the Association of Postgraduate PA Programs for support of this study. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine.

References
  1. United States Department of Labor, Bureau of Labor Statistics. Available at: http://www.bls.gov. Accessed February 16, 2011.
  2. American Academy of Physician Assistants. Available at: http://www.aapa.org. Accessed April 20, 2011.
  3. Society of Hospital Medicine. Available at: http://www.hospitalmedicine.org. Accessed January 24, 2011.
  4. Accreditation Review Commission on Education for the Physician Assistant. Accreditation Standards. Available at: http://www.arc‐pa.org/acc_standards. Accessed February 16, 2011.
  5. Parekh VI, Roy CL. Non‐physician providers in hospital medicine: not so fast. J Hosp Med. 2010;5(2):103-106.
  6. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1:48-56.
  7. Will KK, Budavari AL, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5:94-98.
  8. Resnick AS, Todd BA, Mullen JL, Morris JB. How do surgical residents and non‐physician practitioners play together in the sandbox? Curr Surg. 2006;63:155-164.
  9. Victorino GP, Organ CH. Physician assistant influence on surgery residents. Arch Surg. 2003;138:971-976.
  10. Buch KE, Genovese MY, Conigliaro JL, et al. Non‐physician practitioners' overall enhancement to a surgical resident's experience. J Surg Educ. 2008;65:50-53.
  11. Roblin DW, Howard DH, Becker ER, Kathleen Adams E, Roberts MH. Use of midlevel practitioners to achieve labor cost savings in the primary care practice of an MCO. Health Serv Res. 2004;39:607-626.
  12. Grzybicki DM, Sullivan PJ, Oppy JM, Bethke AM, Raab SS. The economic benefit for family/general medicine practices employing physician assistants. Am J Manag Care. 2002;8:613-620.
  13. Kaissi A, Kralewski J, Dowd B. Financial and organizational factors affecting the employment of nurse practitioners and physician assistants in medical group practices. J Ambul Care Manage. 2003;26:209-216.
  14. Nishimura RA, Linderbaum JA, Naessens JM, Spurrier B, Koch MB, Gaines KA. A nonresident cardiovascular inpatient service improves residents' experiences in an academic medical center: a new model to meet the challenges of the new millennium. Acad Med. 2004;79:426-431.
  15. Kleinpell RM, Ely EW, Grabenkort R. Nurse practitioners and physician assistants in the intensive care unit: an evidence‐based review. Crit Care Med. 2008;36:2888-2897.
  16. Carter AJ, Chochinov AH. A systematic review of the impact of nurse practitioners on cost, quality of care, satisfaction and wait times in the emergency department. CJEM. 2007;9:286-295.
  17. Mathur M, Rampersad A, Howard K, Goldman GM. Physician assistants as physician extenders in the pediatric intensive care unit setting: a 5‐year experience. Pediatr Crit Care Med. 2005;6:14-19.
  18. Abrass CK, Ballweg R, Gilshannon M, Coombs JB. A process for reducing workload and enhancing residents' education at an academic medical center. Acad Med. 2001;76:798-805.
  19. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist‐physician assistant model vs a traditional resident‐based model. J Hosp Med. 2011;6:112-130.
  20. O'Connor AB, Lang VJ, Lurie SJ, et al. The effect of nonteaching services on the distribution of inpatient cases for internal medicine residents. Acad Med. 2009;84:220-225.
  21. Association of Postgraduate PA Programs. Available at: http://appap.org/Home/tabid/38/Default.aspx. Accessed February 16, 2011.
  22. Adult Hospital Medicine Boot Camp for PAs and NPs. Available at: http://www.aapa.org/component/content/article/23—general‐/673‐adult‐hospital‐medicine‐boot‐camp‐for‐pas‐and‐nps. Accessed February 16, 2011.
Issue
Journal of Hospital Medicine - 7(3)
Page Number
190-194

Physician assistants (PA) have rapidly become an integral component in the United States health care delivery system, including in the field of Hospital Medicine, the fastest growing medical field in the United States.1, 2 Since its induction in 1997, hospitalist providers in North America have increased by 30‐fold.3 Correlating with this, the number of PAs practicing in the field of hospital medicine has also increased greatly in recent years. According to the American Academy of Physician Assistants (AAPA) census reports, Hospital Medicine first appeared as one of the specialty choices in the 2006 census (response rate, 33% of all individuals eligible to practice as PAs) when it was selected as the primary specialty by 239 PAs (1.1% of respondents). In the 2008 report (response rate, 35%), the number grew to 421 (1.7%) PAs.2

PA training programs emphasize primary care and offer limited exposure to inpatient medicine. After PA students complete their first 12 months of training in didactic coursework that teach the basic sciences, they typically spend the next year on clinical rotations, largely rooted in outpatient care.2, 4 Upon graduation, PAs do not have to pursue postgraduate training before beginning to practice in their preferred specialty areas. Thus, a majority of PAs going into specialty areas are trained on the job. This is not an exception in the field of hospital medicine.

In recent years, despite an increase in the number of PAs in Hospital Medicine, some medical centers have chosen to phase out the use of midlevel hospitalist providers (including PAs) with the purposeful decision to not hire new midlevel providers.5 The rationale for this strategy is that there is thought to be a steep learning curve that requires much time to overcome before these providers feel comfortable across the breadth of clinical cases. Before they become experienced and confident in caring for a highly complex heterogeneous patient population, they cannot operate autonomously and are not a cost‐effective alternative to physicians. The complexities associated with practicing in this field were clarified in 2006 when the Society of Hospital Medicine identified 51 core competencies in hospital medicine.3, 6 Some hospitalist programs are willing to provide their PAs with on‐the‐job training, but many programs do not have the educational expertise or the resources to make this happen. Structured and focused postgraduate training in hospital medicine seems like a reasonable solution to prepare newly graduating PAs that are interested in pursuing hospitalist careers, but such opportunities are very limited.7

To date, there is no available information about the learning needs of PAs working in hospital medicine settings. We hypothesized that understanding the learning needs of PA hospitalists would inform the development of more effective and efficient training programs. We studied PAs with experience working in hospital medicine to (1) identify self‐perceived gaps in their skills and knowledge upon starting their hospitalist careers and (2) understand their views about optimal training for careers in hospital medicine.

METHODS

Study Design

We conducted a cross‐sectional survey of a convenience sample of self‐identified PAs working in adult Hospital Medicine. The survey was distributed using an electronic survey program.

Participants

The subjects for the survey were identified through the Facebook group PAs in Hospital Medicine, which had 133 members as of July 2010. This source was selected because it was the most comprehensive list of self‐identified hospitalist PAs. Additionally, the group allowed us to send individualized invitations to complete the survey along with subsequent reminder messages to nonresponders. Subjects were eligible to participate if they were PAs with experience working in hospital medicine settings taking care of adult internal medicine inpatients.

Survey Instrument

The survey instrument was developed based on the Core Competencies in Hospital Medicine with the goal of identifying PA hospitalists' knowledge and skill gaps that were present when they started their hospitalist career.

In one section, respondents were asked about content areas among the Core Competencies in Hospital Medicine that they believed would have enhanced their effectiveness in practicing hospital medicine had they had additional training before starting their work as hospitalists. Response options ranged from Strongly Agree to Strongly Disagree. Because some content areas seemed more relevant to physicians, our study team (including a hospitalist physician, a senior hospitalist PA, two curriculum development experts, one medical education research expert, and an experienced hospital medicine research assistant) selected, through rigorous discussion, the topics felt to be particularly germane to PA hospitalists. The relevance of this content to PA hospitalists was confirmed through pilot testing of the instrument. Another series of questions asked the PAs about their views on formal postgraduate training programs. The subjects were also queried about the frequency with which they performed various procedures (using the following scale: Never, Rarely [1–2/year], Regularly [1–2/month], Often [1–2/week]) and whether they felt it was necessary for PAs to have the procedural skills listed as part of the Core Competencies in Hospital Medicine (using the following scale: Not necessary, Preferable, Essential). Finally, the survey included a question about the PAs' preferred learning methods, asking how helpful various approaches were (using the following scale: Not at all, Little, Some, A lot, Tremendously). Demographic information was also collected. The instrument was pilot-tested for clarity with the 9 PA hospitalists affiliated with our hospitalist service and was iteratively revised based on their feedback.

Data Collection and Analysis

Between September and December 2010, the survey invitations were sent as Facebook messages to the 133 members of the Facebook group PAs in Hospital Medicine. Sixteen members could not be contacted because their account setup did not allow us to send messages, and 14 were excluded because they were non‐PA members. In order to maximize participation, up to 4 reminder messages were sent to the 103 targeted PAs. The survey results were analyzed using Stata 11. Descriptive statistics were used to characterize the responses.
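To illustrate the kind of analysis involved, the sketch below computes the descriptive statistics reported in this study (means, standard deviations, and frequency counts with percentages). The authors used Stata 11; this Python equivalent is only an illustration of the approach, and all of the response data in it are made up for the example.

```python
# Illustrative sketch only: the authors analyzed their data in Stata 11.
# This hypothetical Python equivalent shows the descriptive statistics
# reported in the Results: means/SDs for Likert items and n (%) counts
# for categorical items. The response values below are invented.
from statistics import mean, stdev
from collections import Counter

# Hypothetical 1-5 Likert responses for one clinical-condition item.
sepsis_experience = [3, 4, 2, 3, 5, 3, 4, 2, 3, 3]
print(f"mean = {mean(sepsis_experience):.1f}, "
      f"SD = {stdev(sepsis_experience):.1f}")

# Hypothetical categorical item: frequency of performing a procedure.
procedure_freq = ["Never", "Rarely", "Never", "Regularly", "Never"]
counts = Counter(procedure_freq)
total = len(procedure_freq)
for option, n in counts.most_common():
    print(f"{option}: {n} ({100 * n / total:.0f}%)")
```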

This study protocol was approved by the institutional review board.

RESULTS

Sixty-nine of the 103 targeted PAs responded (response rate, 67%). Table 1 provides demographic characteristics of the respondents. The majority of respondents were 26 to 35 years old and had worked as hospitalists for a mean of 4.3 years.

Table 1. Characteristics of the 62 Physician Assistant Respondents Who Elected to Share Demographic and Personal Information*

Abbreviations: ICU, intensive care unit; PA, physician assistant; SD, standard deviation.
*Seven PAs did not provide any personal or demographic information. Because of missing data, numbers may not correspond to the exact percentages.

Age, years, n (%)
  <26: 1 (2)
  26–30: 16 (29)
  31–35: 14 (25)
  36–40: 10 (18)
  41–45: 5 (9)
  >45: 10 (18)
Women, n (%): 35 (63)
Year of graduation from PA school, mode (SD): 2002 (7)
No. of years working/worked as hospitalist, mean (SD): 4.3 (3.4)
Completed any postgraduate training program, n (%): 0 (0)
Hospitalist was the first PA job, n (%): 30 (49)
Salary, US$, n (%)
  50,001–70,000: 1 (2)
  70,001–90,000: 32 (57)
  >90,000: 23 (41)
Location of hospital, n (%)
  Urban: 35 (57)
  Suburban: 21 (34)
  Rural: 5 (8)
Hospital characteristics, n (%)
  Academic medical center: 25 (41)
  Community teaching hospital: 20 (33)
  Community nonteaching hospital: 16 (26)
Responsibilities in addition to taking care of inpatients on medicine floor, n (%)
  Care for patients in ICU: 22 (35)
  Perform inpatient consultations: 31 (50)
  See outpatients: 11 (18)

Clinical Conditions

Table 2 shows the respondents' experience with 19 core competency clinical conditions before beginning their careers as hospitalist PAs. They reported having the most experience in managing diabetes and urinary tract infections, and the least experience in managing hospital-acquired pneumonia and sepsis syndrome.

Table 2. Physician Assistant Experiences with 19 Core Clinical Conditions Before Starting Career in Hospital Medicine

Abbreviation: SD, standard deviation.
*Likert scale: 1, no experience, I knew nothing about this condition; 2, no experience, I had heard/read about this condition; 3, I had experience caring for 1 patient (simulated or real) with this condition; 4, I had experience caring for 2–5 patients with this condition; 5, I had experience caring for many (>5) patients with this condition.

Clinical Condition: Mean (SD)*
Urinary tract infection: 4.5 (0.8)
Diabetes mellitus: 4.5 (0.8)
Asthma: 4.4 (0.9)
Community-acquired pneumonia: 4.3 (0.9)
Chronic obstructive pulmonary disease: 4.3 (1.0)
Cellulitis: 4.2 (0.9)
Congestive heart failure: 4.1 (1.0)
Cardiac arrhythmia: 3.9 (1.1)
Delirium and dementia: 3.8 (1.1)
Acute coronary syndrome: 3.8 (1.2)
Acute renal failure: 3.8 (1.1)
Gastrointestinal bleed: 3.7 (1.1)
Venous thromboembolism: 3.7 (1.2)
Pain management: 3.7 (1.2)
Perioperative medicine: 3.6 (1.4)
Stroke: 3.5 (1.2)
Alcohol and drug withdrawal: 3.4 (1.1)
Sepsis syndrome: 3.3 (1.1)
Hospital-acquired pneumonia: 3.2 (1.1)

Procedures

Most PA hospitalists (67%) reported performing electrocardiogram and chest X-ray interpretations regularly (more than 1–2/week). However, nearly all PA hospitalists never or rarely (less than 1–2/year) perform any invasive procedures, including arthrocentesis (98%), lumbar puncture (100%), paracentesis (91%), thoracentesis (98%), central line placement (91%), peripherally inserted central catheter placement (91%), and peripheral intravenous insertion (91%). Despite performing these procedures infrequently, more than 50% of respondents felt it was either preferable or essential to be able to perform them.

Content Knowledge

The PA hospitalists indicated which content areas might have allowed them to be more successful had they learned the material before starting their hospitalist careers (Table 3). The top 4 topics that PA hospitalists believed would have helped them most in caring for inpatients were palliative care (85% agreed or strongly agreed), nutrition for hospitalized patients (84%), performing consultations in the hospital (64%), and prevention of health care–associated infections (62%).

Table 3. Content Areas that 62 Respondent PAs Believed Would Have Enhanced Their Effectiveness in Practicing Hospital Medicine Had They Had Additional Training Before Starting Their Work as Hospitalists

Health Care System Topics: PAs Who Agreed or Strongly Agreed, n (%)
Palliative care: 47 (85)
Nutrition for hospitalized patients: 46 (84)
Performing consultations in hospital: 35 (64)
Prevention of health care–associated infections: 34 (62)
Diagnostic decision-making processes: 32 (58)
Patient handoff and transitions of care: 31 (56)
Evidence-based medicine: 28 (51)
Communication with patients and families: 27 (49)
Drug safety and drug interactions: 27 (49)
Team approach and multidisciplinary care: 26 (48)
Patient safety and quality improvement processes: 25 (45)
Care of elderly patients: 24 (44)
Medical ethics: 22 (40)
Patient education: 20 (36)
Care of uninsured or underinsured patients: 18 (33)

Professional Growth as Hospitalist Providers

PAs judged working with physician preceptors (mean ± SD, 4.5 ± 0.6) and discussing patients with consultants (mean ± SD, 4.3 ± 0.8) to be most helpful for their professional growth, whereas receiving feedback/audits about their performance (mean ± SD, 3.5 ± 1.0), attending conferences/lectures (mean ± SD, 3.6 ± 0.7), and reading journals/textbooks (mean ± SD, 3.6 ± 0.8) were rated as less useful. Respondents believed that the mean number of months required for new hospitalist PAs to become fully competent team members was 11 months (SD, 8.6). Forty-three percent of respondents shared the perspective that some clinical experience in an inpatient setting was an essential prerequisite for entry into a hospitalist position. Although more than half (58%) felt that completion of a postgraduate training program in hospital medicine was not necessary as a prerequisite, almost all (91%) indicated that they would have been interested in such a program even if it meant receiving a lower stipend than a hospitalist PA salary during the first year on the job (Table 4).

Table 4. Self-Reported Interest from 55 Respondents in Postgraduate Hospitalist Training Depending on Varying Levels of Incentives and Disincentives

Interest in Training: n (%)
Interested and willing to pay tuition: 1 (2)
Interested even if there was no stipend, as long as I didn't have to pay any additional tuition: 3 (5)
Interested ONLY if a stipend of at least 25% of a hospitalist PA salary was offered: 4 (7)
Interested ONLY if a stipend of at least 50% of a hospitalist PA salary was offered: 21 (38)
Interested ONLY if a stipend of at least 75% of a hospitalist PA salary was offered: 21 (38)
Interested ONLY if 100% of a hospitalist PA salary was offered: 4 (7)
Not interested under any circumstances: 1 (2)

DISCUSSION

Our survey addresses a wide range of topics related to PA hospitalists' learning needs, including their experience with the Core Competencies in Hospital Medicine and their views on the benefits of PA training following graduation. Although self-efficacy was not assessed, our study revealed that PAs choosing hospitalist careers have limited prior clinical experience treating many medical conditions that are managed in inpatient settings. This inexperience with commonly seen clinical conditions, such as sepsis, wherein following guidelines can both reduce costs and improve outcomes, is problematic. More experience and training with such conditions would almost certainly reduce variability, improve skills, and augment confidence. These observed variations in experience in caring for conditions that often prompt admission to the hospital emphasize the need to be learner-centered when training PAs who are starting their hospitalist careers, so as to provide tailored guidance and oversight.

Only a few other empirical research articles have focused on PA hospitalists. One article described a postgraduate training program for PAs in hospital medicine that was launched in 2008. The curriculum was developed based on the Core Competencies in Hospital Medicine, and the authors reported that after 12 months of training, their first graduate functioned at the level of a PA with 4 years of experience.7 Several articles describe experiences using midlevel providers (including PAs) in general surgery, primary care medicine, cardiology, emergency medicine, critical care, pediatrics, and hospital medicine settings.5, 8–20 Many of these articles reported favorable results, showing that using midlevel providers was either superior to or as effective as physician-only models in terms of cost and quality measures. Many of these papers also alluded to the ways in which PAs have enabled graduate medical education training programs to comply with residents' duty-hour restrictions. A recent analysis that compared outcomes related to inpatient care provided by a hospitalist-PA model versus a traditional resident-based model revealed a slightly longer length of stay on the PA team but similar charges, readmission rates, and mortality.19 Yet another paper revealed that patients admitted to a residents' service, compared with a nonteaching hospitalist service that uses PAs and nurse practitioners, were different, having higher comorbidity burdens and higher-acuity diagnoses.20 The authors suggested that this variance might be explained by differences in the training, abilities, and goals of the groups. No prior research article has sought to capture the perspectives of practicing hospitalist PAs.

Our study revealed that although half of respondents became hospitalists immediately after graduating from PA school, a majority agreed that additional clinical training in inpatient settings would have been welcome and helpful. This study's results also reveal that although there is a fair amount of perceived interest in postgraduate training programs in hospital medicine, very few such training opportunities exist for PAs.7, 21 The American Academy of Physician Assistants, the Society of Hospital Medicine, and the American Academy of Nurse Practitioners annually cosponsor the Adult Hospital Medicine Boot Camp for PAs and nurse practitioners to facilitate knowledge acquisition, but this course is truly an orientation rather than a comprehensive training program.22 Our findings suggest that more rigorous and thorough training in hospital medicine would be valued and appreciated by PA hospitalists.

Several limitations of this study should be considered. First, our survey respondents may not represent the entire spectrum of practicing PA hospitalists. However, the demographic data of the 421 PAs who indicated their specialty as hospital medicine in the 2008 National Physician Assistants Census Report were not dissimilar from our informants: 65% were women, and their mean number of years in hospital medicine was 3.9.2 Second, our study sample was small; it was difficult to identify a national sample of hospitalist PAs, and we had to resort to a creative use of social media to assemble one. Third, the study relied exclusively on self-report, and because we asked about perceived learning needs at the time respondents started working as hospitalists, recall bias cannot be excluded. However, questions addressing attitudes and beliefs can only be ascertained from the informants themselves. That said, input from the hospitalist physicians who supervise these PAs about their training needs would have strengthened the reliability of the data, but this was not possible given the sampling strategy we elected to use. Finally, our survey instrument was developed based on the Core Competencies in Hospital Medicine, which is a blueprint for developing standardized curricula for teaching hospital medicine in medical schools, postgraduate training programs (ie, residency, fellowship), and continuing medical education programs. It is not clear whether the same competencies should be expected of PA hospitalists, who may have different job descriptions from physician hospitalists.

In conclusion, we present the first national data on the self-perceived learning needs of PAs working in hospital medicine settings. This study collates the perceptions of PAs working in hospital medicine and highlights the fact that training in PA school does not adequately prepare graduates to care for hospitalized patients. Hospitalist groups may use this study's findings to coach and instruct newly hired or inexperienced hospitalist PAs, particularly until postgraduate training opportunities become more prevalent. PA schools may consider the results of this study in modifying their curricula, in hopes of emphasizing the clinical content that may be most relevant for a proportion of their graduates.

Acknowledgements

The authors would like to thank Drs. David Kern and Belinda Chen at Johns Hopkins Bayview Medical Center for their assistance in developing the survey instrument.

Financial support: This study was supported by the Linda Brandt Research Award program of the Association of Postgraduate PA Programs. Dr. Wright is a Miller‐Coulson Family Scholar and was supported through the Johns Hopkins Center for Innovative Medicine.

Disclosures: Dr. Torok and Ms. Lackner received a Linda Brandt Research Award from the Association of Postgraduate PA Programs for support of this study. Dr. Wright is a Miller‐Coulson Family Scholar and is supported through the Johns Hopkins Center for Innovative Medicine.

References
  1. United States Department of Labor, Bureau of Labor Statistics. Available at: http://www.bls.gov. Accessed February 16, 2011.
  2. American Academy of Physician Assistants. Available at: http://www.aapa.org. Accessed April 20, 2011.
  3. Society of Hospital Medicine. Available at: http://www.hospitalmedicine.org. Accessed January 24, 2011.
  4. Accreditation Review Commission on Education for the Physician Assistant. Accreditation Standards. Available at: http://www.arc-pa.org/acc_standards. Accessed February 16, 2011.
  5. Parekh VI, Roy CL. Non-physician providers in hospital medicine: not so fast. J Hosp Med. 2010;5(2):103–106.
  6. Dressler DD, Pistoria MJ, Budnitz TL, McKean SC, Amin AN. Core competencies in hospital medicine: development and methodology. J Hosp Med. 2006;1:48–56.
  7. Will KK, Budavari AL, Wilkens JA, Mishark K, Hartsell ZC. A hospitalist postgraduate training program for physician assistants. J Hosp Med. 2010;5:94–98.
  8. Resnick AS, Todd BA, Mullen JL, Morris JB. How do surgical residents and non-physician practitioners play together in the sandbox? Curr Surg. 2006;63:155–164.
  9. Victorino GP, Organ CH. Physician assistant influence on surgery residents. Arch Surg. 2003;138:971–976.
  10. Buch KE, Genovese MY, Conigliaro JL, et al. Non-physician practitioners' overall enhancement to a surgical resident's experience. J Surg Educ. 2008;65:50–53.
  11. Roblin DW, Howard DH, Becker ER, Kathleen Adams E, Roberts MH. Use of midlevel practitioners to achieve labor cost savings in the primary care practice of an MCO. Health Serv Res. 2004;39:607–626.
  12. Grzybicki DM, Sullivan PJ, Oppy JM, Bethke AM, Raab SS. The economic benefit for family/general medicine practices employing physician assistants. Am J Manag Care. 2002;8:613–620.
  13. Kaissi A, Kralewski J, Dowd B. Financial and organizational factors affecting the employment of nurse practitioners and physician assistants in medical group practices. J Ambul Care Manage. 2003;26:209–216.
  14. Nishimura RA, Linderbaum JA, Naessens JM, Spurrier B, Koch MB, Gaines KA. A nonresident cardiovascular inpatient service improves residents' experiences in an academic medical center: a new model to meet the challenges of the new millennium. Acad Med. 2004;79:426–431.
  15. Kleinpell RM, Ely EW, Grabenkort R. Nurse practitioners and physician assistants in the intensive care unit: an evidence-based review. Crit Care Med. 2008;36:2888–2897.
  16. Carter AJ, Chochinov AH. A systematic review of the impact of nurse practitioners on cost, quality of care, satisfaction and wait times in the emergency department. CJEM. 2007;9:286–295.
  17. Mathur M, Rampersad A, Howard K, Goldman GM. Physician assistants as physician extenders in the pediatric intensive care unit setting—a 5-year experience. Pediatr Crit Care Med. 2005;6:14–19.
  18. Abrass CK, Ballweg R, Gilshannon M, Coombs JB. A process for reducing workload and enhancing residents' education at an academic medical center. Acad Med. 2001;76:798–805.
  19. Singh S, Fletcher KE, Schapira MM, et al. A comparison of outcomes of general medical inpatient care provided by a hospitalist-physician assistant model vs a traditional resident-based model. J Hosp Med. 2011;6:122–130.
  20. O'Connor AB, Lang VJ, Lurie SJ, et al. The effect of nonteaching services on the distribution of inpatient cases for internal medicine residents. Acad Med. 2009;84:220–225.
  21. Association of Postgraduate PA Programs. Available at: http://appap.org/Home/tabid/38/Default.aspx. Accessed February 16, 2011.
  22. Adult Hospital Medicine Boot Camp for PAs and NPs. Available at: http://www.aapa.org/component/content/article/23—general‐/673‐adult‐hospital‐medicine‐boot‐camp‐for‐pas‐and‐nps. Accessed February 16, 2011.
Issue
Journal of Hospital Medicine - 7(3)
Page Number
190-194
Display Headline
Learning needs of physician assistants working in hospital medicine
Article Source
Copyright © 2011 Society of Hospital Medicine
Correspondence Location
Johns Hopkins University School of Medicine, Johns Hopkins Bayview Medical Center, 5200 Eastern Avenue, MFL Building West Tower 6F CIMS Suite, Baltimore, MD 21224