
Long Peripheral Catheters: A Retrospective Review of Major Complications


Introduced in the 1950s, midline catheters have become a popular option for intravenous (IV) access.1,2 Ranging from 8 to 25 cm in length, they are inserted in the veins of the upper arm. Unlike peripherally inserted central catheters (PICCs), the tip of midline catheters terminates proximal to the axillary vein; thus, midlines are peripheral, not central venous access devices.1-3 One popular variation of a midline catheter, though nebulously defined, is the long peripheral catheter (LPC), a device ranging from 6 to 15 cm in length.4,5

Concerns regarding inappropriate use and complications such as thrombosis and central line-associated bloodstream infection (CLABSI) have spurred growth in the use of LPCs.6 However, data regarding complication rates with these devices are limited. Whether LPCs are a safe and viable option for IV access is unclear. We conducted a retrospective study to examine indications, patterns of use, and complications following LPC insertion in hospitalized patients.

METHODS

Device Selection

Our institution is a 470-bed tertiary care, safety-net hospital in Chicago, Illinois. Our vascular access team (VAT) performs a patient assessment and selects IV devices based upon published standards for device appropriateness.7 We retrospectively collated electronic requests for LPC insertion in adult inpatients between October 2015 and June 2017. Cases involving (1) duplicate orders, (2) patient refusal, or placement of (3) a peripheral intravenous catheter of any length or (4) a PICC were excluded from this analysis.

VAT and Device Characteristics

We used the Bard PowerGlide® (Bard Access Systems, Inc., Salt Lake City, Utah), an 18-gauge, 8-10 cm, power-injectable, polyurethane LPC. Bundled kits (eg, device, gown, dressing) were utilized, and VAT providers underwent two weeks of training prior to the study period. All LPCs were inserted in the upper extremities under sterile technique using ultrasound guidance (accelerated Seldinger technique). Placement was confirmed by aspiration, flush, and ultrasound visualization of the catheter tip within the vein. An antimicrobial dressing was applied to the catheter insertion site, and daily saline flushes and weekly dressing changes by bedside nurses were used for device maintenance. LPC placement was available on all nonholiday weekdays from 8 am to 5 pm.

Data Collection

For each LPC recipient, demographic and comorbidity data were collected to calculate the Charlson Comorbidity Index (Table 1). Every LPC recipient’s history of deep vein thrombosis (DVT) and catheter-related infection (CRI) was recorded. Procedural information (eg, inserter, vein, and number of attempts) was obtained from insertion notes. All data were extracted from the electronic medical record via chart review. Two reviewers verified outcomes to ensure concordance with stated definitions (ie, DVT, CRI). Device parameters, including dwell time, indication, and time to complication(s) were also collected.


Primary Outcomes

The primary outcome was the incidence of DVT and CRI (Table 2). DVT was defined as radiographically confirmed (eg, ultrasound, computed tomography) thrombosis in the presence of patient signs or symptoms. CRI was defined in accordance with Timsit et al.8 as follows: catheter-related clinical sepsis without bloodstream infection defined as (1) combination of fever (body temperature >38.5°C) or hypothermia (body temperature <36.5°C), (2) catheter-tip culture yielding ≥10³ CFU/mL, (3) pus at the insertion site or resolution of clinical sepsis after catheter removal, and (4) absence of any other infectious focus or catheter-related bloodstream infection (CRBSI). CRBSI was defined as a combination of (1) one or more positive peripheral blood cultures sampled immediately before or within 48 hours after catheter removal, (2) a quantitative catheter-tip culture testing positive for the same microorganisms (same species and susceptibility pattern) or a differential time to positivity of blood cultures ≥2 hours, and (3) no other infectious focus explaining the positive blood culture result.
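The CRBSI definition combines three conditions, which can be easy to misread in prose. As a minimal sketch, the criteria can be expressed as a boolean check; the record fields below are hypothetical and do not come from the study's data model:

```python
from dataclasses import dataclass

# Hypothetical per-catheter record; the study does not describe its data structures.
@dataclass
class CatheterCase:
    positive_peripheral_cultures: int          # cultures drawn just before or within 48 h of removal
    tip_culture_matches_blood: bool            # same species and susceptibility pattern
    differential_time_to_positivity_hr: float  # blood-culture differential time to positivity, hours
    other_infectious_focus: bool               # alternative explanation for the bacteremia

def meets_crbsi_definition(c: CatheterCase) -> bool:
    """Encodes the Timsit et al. CRBSI criteria quoted in the text:
    (1) >=1 positive peripheral blood culture, (2) a matching tip culture
    OR differential time to positivity >=2 h, and (3) no other focus."""
    culture_link = c.tip_culture_matches_blood or c.differential_time_to_positivity_hr >= 2
    return (c.positive_peripheral_cultures >= 1
            and culture_link
            and not c.other_infectious_focus)
```

Note that criterion (2) is a disjunction: either the matching tip culture or the ≥2-hour differential time to positivity satisfies it, so a case with no tip-culture match can still qualify.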

Secondary Outcomes

Secondary outcomes, defined as minor complications, included infiltration, thrombophlebitis, and catheter occlusion. Infiltration was defined as localized swelling due to infusate or site leakage. Thrombophlebitis was defined as one or more of the following: localized erythema, palpable cord, tenderness, or streaking. Occlusion was defined as nonpatency of the catheter due to the inability to flush or aspirate. Definitions for secondary outcomes are consistent with those used in prior studies.9

Statistical Analysis

Patient and LPC characteristics were analyzed using descriptive statistics. Results were reported as percentages, means, medians (interquartile range [IQR]), and rates per 1,000 catheter days. All analyses were conducted in Stata v.15 (StataCorp, College Station, Texas).
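The rates above follow directly from event counts, device counts, and catheter days. A minimal sketch using the study's published totals (539 LPCs, 5,543 catheter days); the authors' actual analysis was performed in Stata v.15, so this Python version is illustrative only:

```python
# Illustrative helpers reproducing the descriptive statistics reported in the
# text; function names are ours, not the authors'.

def percent_incidence(events: int, devices: int) -> float:
    """Per-device incidence, reported as a percentage."""
    return 100 * events / devices

def rate_per_1000_catheter_days(events: int, catheter_days: int) -> float:
    """Events per 1,000 catheter days, the rate unit used in the paper."""
    return 1000 * events / catheter_days

# Reproduces the reported CRI figures: 0.6% of devices, 0.54 per 1,000 catheter days.
print(round(percent_incidence(3, 539), 1))             # 0.6
print(round(rate_per_1000_catheter_days(3, 5543), 2))  # 0.54

# And the reported DVT incidence: 1.7% (9/539).
print(round(percent_incidence(9, 539), 1))             # 1.7
```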

RESULTS

Within the 20-month study period, a total of 539 LPCs representing 5,543 catheter days were available for analysis. The mean patient age was 53 years. A total of 90 patients (16.7%) had a history of DVT, while 6 (1.1%) had a history of CRI. We calculated a median Charlson Comorbidity Index of 4 (IQR, 2-7), suggesting an estimated one-year postdischarge survival of 53% (Table 1).

The majority of LPCs (99.6% [537/539]) were single-lumen catheters. No patient had more than one concurrent LPC. The cannulation success rate on the first attempt was 93.9% (507/539). The brachial or basilic veins were primarily targeted (98.7% [532/539]). Difficult intravenous access represented 48.8% (263/539) of indications, and postdischarge parenteral antibiotics constituted 47.9% (258/539). The median catheter dwell time was eight days (IQR, 4-14 days).

Nine DVTs (1.7% [9/539]) occurred in patients with LPCs. The incidence of DVT was higher in patients with a history of DVT (5.7% [5/90]). The median time from insertion to DVT was 11 (IQR, 5-14) days. DVTs were managed with LPC removal and systemic anticoagulation in accordance with catheter-related DVT guidelines. The rate of CRI was 0.6% (3/539), or 0.54 per 1,000 catheter days. Two CRIs had positive blood cultures, while one had negative cultures. Infections occurred after a median of 12 (IQR, 8-15) days of catheter dwell. Each was treated with LPC removal and IV antibiotics, with two patients receiving two weeks and one receiving six weeks of antibiotic therapy (Table 2).

With respect to secondary outcomes, the incidence of infiltration was 0.4% (2/539), thrombophlebitis 0.7% (4/539), and catheter occlusion 0.9% (5/539). The time to event was 8.5, 3.75, and 5.4 days, respectively. Collectively, 2.0% of devices experienced a minor complication.


DISCUSSION

In our single-center study, LPCs were primarily inserted for difficult venous access or parenteral antibiotics. Despite a clinically complex population with a high number of comorbidities, rates of major and minor complications associated with LPCs were low. These data suggest that LPCs are a safe alternative to PICCs and other central access devices for short-term use.

Our incidence of CRI of 0.6% (0.54 per 1,000 catheter days) is similar to or lower than that of other studies.2,10,11 An incidence of 0%-1.5% was observed in two recent publications about midline catheters, with rates across individual studies and hospital sites varying widely.12,13 A systematic review of intravascular devices reported CRI rates of 0.4% (0.2 per 1,000 catheter days) for midlines and 0.1% (0.5 per 1,000 catheter days) for peripheral IVs, in contrast to PICCs at 3.1% (1.1 per 1,000 catheter days).14 However, catheters of varying lengths and diameters were used in studies within the review, potentially leading to heterogeneous outcomes. In accordance with existing data, CRI incidence in our study increased with catheter dwell time.10

The 1.7% rate of DVT observed in our study is on the lower end of existing data (1.4%-5.9%).12-15 Compared with PICCs (2%-15%), the incidence of venous thrombosis appears to be lower with midlines/LPCs, justifying their use as an alternative device for IV access.7,9,12,14 There was an overall low rate of minor complications, similar to recently published results.10 As DVT rates were greater in patients with a history of DVT (5.7%), caution is warranted when using these devices in this population.

Our experience with LPCs suggests financial and patient benefits. The cost of LPCs is lower than that of central access devices.4 As rates of CRI were low, costs related to CLABSIs from PICC use may be reduced by appropriate LPC use. LPCs may allow routine blood draws, which could improve the patient experience, albeit with its own risks. Current recommendations support the use of PICCs or LPCs, somewhat interchangeably, for patients with appropriate indications needing IV therapy for more than five to six days.2,7 However, LPCs now account for 57% of vascular access procedures in our center and have reduced reliance on PICCs and their attendant complications.

Our study has several limitations. First, the terms LPC and midline are often used interchangeably in the literature.4,5 Therefore, reported complication rates may not reflect those of LPCs alone, limiting comparisons. Second, ours was a single-center study with experts assessing device appropriateness and performing ultrasound-guided insertions; our findings may not be generalizable to dissimilar settings. Third, we did not track LPC complications such as nonpatency and leakage. As prior studies have reported high rates of these complications, caution is advised when interpreting our findings.15 Finally, we retrospectively extracted data from our medical records; limitations in documentation may influence our findings.

CONCLUSION

In patients requiring short-term IV therapy, these data suggest LPCs have low complication rates and may be safely used as an alternative option for venous access.

Acknowledgments

The authors thank Drs. Laura Hernandez, Andres Mendez Hernandez, and Victor Prado for their assistance in data collection. The authors also thank Mr. Onofre Donceras and Dr. Sharon Welbel from the John H. Stroger, Jr. Hospital of Cook County Department of Infection Control & Epidemiology for their assistance in reviewing local line infection data.

Drs. Patel and Chopra developed the study design. Drs. Patel, Araujo, Parra Rodriguez, Ramirez Sanchez, and Chopra contributed to manuscript writing. Ms. Snyder provided statistical analysis. All authors have seen and approved the final manuscript for submission.

 

 

Disclosures

The authors have nothing to disclose.

References

1. Anderson NR. Midline catheters: the middle ground of intravenous therapy administration. J Infus Nurs. 2004;27(5):313-321.
2. Adams DZ, Little A, Vinsant C, et al. The midline catheter: a clinical review. J Emerg Med. 2016;51(3):252-258. https://doi.org/10.1016/j.jemermed.2016.05.029.
3. Scoppettuolo G, Pittiruti M, Pitoni S, et al. Ultrasound-guided “short” midline catheters for difficult venous access in the emergency department: a retrospective analysis. Int J Emerg Med. 2016;9(1):3. https://doi.org/10.1186/s12245-016-0100-0.
4. Qin KR, Nataraja RM, Pacilli M. Long peripheral catheters: is it time to address the confusion? J Vasc Access. 2018;20(5). https://doi.org/10.1177/1129729818819730.
5. Pittiruti M, Scoppettuolo G. The GAVeCeLT Manual of PICC and Midlines. Milano: EDRA; 2016.
6. Dawson RB, Moureau NL. Midline catheters: an essential tool in CLABSI reduction. Infection Control Today. https://www.infectioncontroltoday.com/clabsi/midline-catheters-essential-tool-clabsi-reduction. Accessed February 19, 2018.
7. Chopra V, Flanders SA, Saint S, et al. The Michigan Appropriateness Guide for Intravenous Catheters (MAGIC): results from a multispecialty panel using the RAND/UCLA appropriateness method. Ann Intern Med. 2015;163(6):S1-S40. https://doi.org/10.7326/M15-0744.
8. Timsit JF, Schwebel C, Bouadma L, et al. Chlorhexidine-impregnated sponges and less frequent dressing changes for prevention of catheter-related infections in critically ill adults: a randomized controlled trial. JAMA. 2009;301(12):1231-1241. https://doi.org/10.1001/jama.2009.376.
9. Bahl A, Karabon P, Chu D. Comparison of venous thrombosis complications in midlines versus peripherally inserted central catheters: are midlines the safer option? Clin Appl Thromb Hemost. 2019;25. https://doi.org/10.1177/1076029619839150.
10. Goetz AM, Miller J, Wagener MM, et al. Complications related to intravenous midline catheter usage. A 2-year study. J Intraven Nurs. 1998;21(2):76-80.
11. Xu T, Kingsley L, DiNucci S, et al. Safety and utilization of peripherally inserted central catheters versus midline catheters at a large academic medical center. Am J Infect Control. 2016;44(12):1458-1461. https://doi.org/10.1016/j.ajic.2016.09.010.
12. Chopra V, Kaatz S, Swaminathan L, et al. Variation in use and outcomes related to midline catheters: results from a multicentre pilot study. BMJ Qual Saf. 2019;28(9):714-720. https://doi.org/10.1136/bmjqs-2018-008554.
13. Badger J. Long peripheral catheters for deep arm vein venous access: A systematic review of complications. Heart Lung. 2019;48(3):222-225. https://doi.org/10.1016/j.hrtlng.2019.01.002.
14. Maki DG, Kluger DM, Crnich CJ. The risk of bloodstream infection in adults with different intravascular devices: a systematic review of 200 published prospective studies. Mayo Clin Proc. 2006;81(9):1159-1171. https://doi.org/10.4065/81.9.1159.
15. Zerla PA, Caravella G, De Luca G, et al. Open- vs closed-tip valved peripherally inserted central catheters and midlines: Findings from a vascular access database. J Assoc Vasc Access. 2015;20(3):169-176. https://doi.org/10.1016/j.java.2015.06.001.

Journal of Hospital Medicine 14(12):758-760. Published Online First October 23, 2019.

Introduced in the 1950s, midline catheters have become a popular option for intravenous (IV) access.1,2 Ranging from 8 to 25 cm in length, they are inserted in the veins of the upper arm. Unlike peripherally inserted central catheters (PICCs), the tip of midline catheters terminates proximal to the axillary vein; thus, midlines are peripheral, not central venous access devices.1-3 One popular variation of a midline catheter, though nebulously defined, is the long peripheral catheter (LPC), a device ranging from 6 to 15 cm in length.4,5

Concerns regarding inappropriate use and complications such as thrombosis and central line-associated bloodstream infection (CLABSI) have spurred growth in the use of LPCs.6 However, data regarding complication rates with these devices are limited. Whether LPCs are a safe and viable option for IV access is unclear. We conducted a retrospective study to examine indications, patterns of use, and complications following LPC insertion in hospitalized patients.

METHODS

Device Selection

Our institution is a 470-bed tertiary care, safety-net hospital in Chicago, Illinois. Our vascular access team (VAT) performs a patient assessment and selects IV devices based upon published standards for device appropriateness. 7 We retrospectively collated electronic requests for LPC insertion on adult inpatients between October 2015 and June 2017. Cases where (1) duplicate orders, (2) patient refusal, (3) peripheral intravenous catheter of any length, or (4) PICCs were placed were excluded from this analysis.

VAT and Device Characteristics

We used Bard PowerGlide® (Bard Access Systems, Inc., Salt Lake City, Utah), an 18-gauge, 8-10 cm long, power-injectable, polyurethane LPC. Bundled kits (ie, device, gown, dressing, etc.) were utilized, and VAT providers underwent two weeks of training prior to the study period. All LPCs were inserted in the upper extremities under sterile technique using ultrasound guidance (accelerated Seldinger technique). Placement confirmation was verified by aspiration, flush, and ultrasound visualization of the catheter tip within the vein. An antimicrobial dressing was applied to the catheter insertion site, and daily saline flushes and weekly dressing changes by bedside nurses were used for device maintenance. LPC placement was available on all nonholiday weekdays from 8 am to 5 pm.

Data Selection

For each LPC recipient, demographic and comorbidity data were collected to calculate the Charlson Comorbidity Index (Table 1). Every LPC recipient’s history of deep vein thrombosis (DVT) and catheter-related infection (CRI) was recorded. Procedural information (eg, inserter, vein, and number of attempts) was obtained from insertion notes. All data were extracted from the electronic medical record via chart review. Two reviewers verified outcomes to ensure concordance with stated definitions (ie, DVT, CRI). Device parameters, including dwell time, indication, and time to complication(s) were also collected.

 

 

Primary Outcomes

The primary outcome was the incidence of DVT and CRI (Table 2). DVT was defined as radiographically confirmed (eg, ultrasound, computed tomography) thrombosis in the presence of patient signs or symptoms. CRI was defined in accordance with Timsit et al.8 as follows: catheter-related clinical sepsis without bloodstream infection defined as (1) combination of fever (body temperature >38.5°C) or hypothermia (body temperature <36.5°C), (2) catheter-tip culture yielding ≥103 CFUs/mL, (3) pus at the insertion site or resolution of clinical sepsis after catheter removal, and (4) absence of any other infectious focus or catheter-related bloodstream infection (CRBSI). CRBSI was defined as a combination of (1) one or more positive peripheral blood cultures sampled immediately before or within 48 hours after catheter removal, (2) a quantitative catheter-tip culture testing positive for the same microorganisms (same species and susceptibility pattern) or a differential time to positivity of blood cultures ≥2 hours, and (3) no other infectious focus explaining the positive blood culture result.

Secondary Outcomes

Secondary outcomes, defined as minor complications, included infiltration, thrombophlebitis, and catheter occlusion. Infiltration was defined as localized swelling due to infusate or site leakage. Thrombophlebitis was defined as one or more of the following: localized erythema, palpable cord, tenderness, or streaking. Occlusion was defined as nonpatency of the catheter due to the inability to flush or aspirate. Definitions for secondary outcomes are consistent with those used in prior studies.9

Statistical Analysis

Patient and LPC characteristics were analyzed using descriptive statistics. Results were reported as percentages, means, medians (interquartile range [IQR]), and rates per 1,000 catheter days. All analyses were conducted in Stata v.15 (StataCorp, College Station, Texas).

RESULTS

Within the 20-month study period, a total of 539 LPCs representing 5,543 catheter days were available for analysis. The mean patient age was 53 years. A total of 90 patients (16.7%) had a history of DVT, while 6 (1.1%) had a history of CRI. We calculated a median Charlson index of 4 (interquartile range [IQR], 2-7), suggesting an estimated one-year postdischarge survival of 53% (Table 1).

The majority of LPCs (99.6% [537/539]) were single lumen catheters. No patient had more than one concurrent LPC. The cannulation success rate on the first attempt was 93.9% (507/539). The brachial or basilic veins were primarily targeted (98.7%, [532/539]). Difficult intravenous access represented 48.8% (263/539) of indications, and postdischarge parenteral antibiotics constituted 47.9% (258/539). The median catheter dwell time was eight days (IQR, 4-14 days).

Nine DVTs (1.7% [9/539]) occurred in patients with LPCs. The incidence of DVT was higher in patients with a history of DVT (5.7%, 5/90). The median time from insertion to DVT was 11 (IQR, 5-14) days. DVTs were managed with LPC removal and systemic anticoagulation in accordance with catheter-related DVT guidelines. The rate of CRI was 0.6% (3/539), or 0.54 per 1,000 catheter days. Two CRIs had positive blood cultures, while one had negative cultures. Infections occurred after a median of 12 (IQR, 8-15) days of catheter dwell. Each was treated with LPC removal and IV antibiotics, with two patients receiving two weeks and one receiving six weeks of antibiotic therapy (Table 2).

With respect to secondary outcomes, the incidence of infiltration was 0.4% (2/539), thrombophlebitis 0.7% (4/539), and catheter occlusion 0.9% (5/539). The time to event was 8.5, 3.75, and 5.4 days, respectively. Collectively, 2.0% of devices experienced a minor complication.

 

 

DISCUSSION

In our single-center study, LPCs were primarily inserted for difficult venous access or parenteral antibiotics. Despite a clinically complex population with a high number of comorbidities, rates of major and minor complications associated with LPCs were low. These data suggest that LPCs are a safe alternative to PICCs and other central access devices for short-term use.

Our incidence of CRI of 0.6% (0.54 per 1,000 catheter days) is similar to or lower than other studies.2,10,11 An incidence of 0%-1.5% was observed in two recent publications about midline catheters, with rates across individual studies and hospital sites varying widely.12,13 A systematic review of intravascular devices reported CRI rates of 0.4% (0.2 per 1,000 catheter days) for midlines and 0.1% (0.5 per 1,000 catheter days for peripheral IVs), in contrast to PICCs at 3.1% (1.1 per 1,000 catheter days).14 However, catheters of varying lengths and diameters were used in studies within the review, potentially leading to heterogeneous outcomes. In accordance with existing data, CRI incidence in our study increased with catheter dwell time.10

The 1.7% rate of DVT observed in our study is on the lower end of existing data (1.4%-5.9%).12-15 Compared with PICCs (2%-15%), the incidence of venous thrombosis appears to be lower with midlines/LPCs—justifying their use as an alternative device for IV access.7,9,12,14 There was an overall low rate of minor complications, similar to recently published results.10 As rates were greater in patients with a history of DVT (5.7%), caution is warranted when using these devices in this population.

Our experience with LPCs suggests financial and patient benefits. The cost of LPCs is lower than central access devices.4 As rates of CRI were low, costs related to CLABSIs from PICC use may be reduced by appropriate LPC use. LPCs may allow the ability to draw blood routinely, which could improve the patient experience—albeit with its own risks. Current recommendations support the use of PICCs or LPCs, somewhat interchangeably, for patients with appropriate indications needing IV therapy for more than five to six days.2,7 However, LPCs now account for 57% of vascular access procedures in our center and have led to a decrease in reliance on PICCs and attendant complications.

Our study has several limitations. First, LPCs and midlines are often used interchangeably in the literature.4,5 Therefore, reported complication rates may not reflect those of LPCs alone and may limit comparisons. Second, ours was a single-center study with experts assessing device appropriateness and performing ultrasound-guided insertions; our findings may not be generalizable to dissimilar settings. Third, we did not track LPC complications such as nonpatency and leakage. As prior studies reported high rates of complications such as these events, caution is advised when interpreting our findings.15 Finally, we retrospectively extracted data from our medical records; limitations in documentation may influence our findings.

CONCLUSION

In patients requiring short-term IV therapy, these data suggest LPCs have low complication rates and may be safely used as an alternative option for venous access.

Acknowledgments

The authors thank Drs. Laura Hernandez, Andres Mendez Hernandez, and Victor Prado for their assistance in data collection. The authors also thank Mr. Onofre Donceras and Dr. Sharon Welbel from the John H. Stroger, Jr. Hospital of Cook County Department of Infection Control & Epidemiology for their assistance in reviewing local line infection data.

Drs. Patel and Chopra developed the study design. Drs. Patel, Araujo, Parra Rodriguez, Ramirez Sanchez, and Chopra contributed to manuscript writing. Ms. Snyder provided statistical analysis. All authors have seen and approved the final manuscript for submission.

 

 

Disclosures

The authors have nothing to disclose.

Introduced in the 1950s, midline catheters have become a popular option for intravenous (IV) access.1,2 Ranging from 8 to 25 cm in length, they are inserted in the veins of the upper arm. Unlike peripherally inserted central catheters (PICCs), the tip of midline catheters terminates proximal to the axillary vein; thus, midlines are peripheral, not central venous access devices.1-3 One popular variation of a midline catheter, though nebulously defined, is the long peripheral catheter (LPC), a device ranging from 6 to 15 cm in length.4,5

Concerns regarding inappropriate use and complications such as thrombosis and central line-associated bloodstream infection (CLABSI) have spurred growth in the use of LPCs.6 However, data regarding complication rates with these devices are limited. Whether LPCs are a safe and viable option for IV access is unclear. We conducted a retrospective study to examine indications, patterns of use, and complications following LPC insertion in hospitalized patients.

METHODS

Device Selection

Our institution is a 470-bed tertiary care, safety-net hospital in Chicago, Illinois. Our vascular access team (VAT) performs a patient assessment and selects IV devices based upon published standards for device appropriateness. 7 We retrospectively collated electronic requests for LPC insertion on adult inpatients between October 2015 and June 2017. Cases where (1) duplicate orders, (2) patient refusal, (3) peripheral intravenous catheter of any length, or (4) PICCs were placed were excluded from this analysis.

VAT and Device Characteristics

We used Bard PowerGlide® (Bard Access Systems, Inc., Salt Lake City, Utah), an 18-gauge, 8-10 cm long, power-injectable, polyurethane LPC. Bundled kits (ie, device, gown, dressing, etc.) were utilized, and VAT providers underwent two weeks of training prior to the study period. All LPCs were inserted in the upper extremities under sterile technique using ultrasound guidance (accelerated Seldinger technique). Placement confirmation was verified by aspiration, flush, and ultrasound visualization of the catheter tip within the vein. An antimicrobial dressing was applied to the catheter insertion site, and daily saline flushes and weekly dressing changes by bedside nurses were used for device maintenance. LPC placement was available on all nonholiday weekdays from 8 am to 5 pm.

Data Selection

For each LPC recipient, demographic and comorbidity data were collected to calculate the Charlson Comorbidity Index (Table 1). Every LPC recipient’s history of deep vein thrombosis (DVT) and catheter-related infection (CRI) was recorded. Procedural information (eg, inserter, vein, and number of attempts) was obtained from insertion notes. All data were extracted from the electronic medical record via chart review. Two reviewers verified outcomes to ensure concordance with stated definitions (ie, DVT, CRI). Device parameters, including dwell time, indication, and time to complication(s) were also collected.

 

 

Primary Outcomes

The primary outcome was the incidence of DVT and CRI (Table 2). DVT was defined as radiographically confirmed (eg, ultrasound, computed tomography) thrombosis in the presence of patient signs or symptoms. CRI was defined in accordance with Timsit et al.8 as follows: catheter-related clinical sepsis without bloodstream infection defined as (1) combination of fever (body temperature >38.5°C) or hypothermia (body temperature <36.5°C), (2) catheter-tip culture yielding ≥103 CFUs/mL, (3) pus at the insertion site or resolution of clinical sepsis after catheter removal, and (4) absence of any other infectious focus or catheter-related bloodstream infection (CRBSI). CRBSI was defined as a combination of (1) one or more positive peripheral blood cultures sampled immediately before or within 48 hours after catheter removal, (2) a quantitative catheter-tip culture testing positive for the same microorganisms (same species and susceptibility pattern) or a differential time to positivity of blood cultures ≥2 hours, and (3) no other infectious focus explaining the positive blood culture result.

Secondary Outcomes

Secondary outcomes, defined as minor complications, included infiltration, thrombophlebitis, and catheter occlusion. Infiltration was defined as localized swelling due to infusate or site leakage. Thrombophlebitis was defined as one or more of the following: localized erythema, palpable cord, tenderness, or streaking. Occlusion was defined as nonpatency of the catheter due to the inability to flush or aspirate. Definitions for secondary outcomes are consistent with those used in prior studies.9

Statistical Analysis

Patient and LPC characteristics were analyzed using descriptive statistics. Results were reported as percentages, means, medians (interquartile range [IQR]), and rates per 1,000 catheter days. All analyses were conducted in Stata v.15 (StataCorp, College Station, Texas).

RESULTS

Within the 20-month study period, a total of 539 LPCs representing 5,543 catheter days were available for analysis. The mean patient age was 53 years. A total of 90 patients (16.7%) had a history of DVT, while 6 (1.1%) had a history of CRI. We calculated a median Charlson index of 4 (interquartile range [IQR], 2-7), suggesting an estimated one-year postdischarge survival of 53% (Table 1).

The majority of LPCs (99.6% [537/539]) were single lumen catheters. No patient had more than one concurrent LPC. The cannulation success rate on the first attempt was 93.9% (507/539). The brachial or basilic veins were primarily targeted (98.7%, [532/539]). Difficult intravenous access represented 48.8% (263/539) of indications, and postdischarge parenteral antibiotics constituted 47.9% (258/539). The median catheter dwell time was eight days (IQR, 4-14 days).

Nine DVTs (1.7% [9/539]) occurred in patients with LPCs. The incidence of DVT was higher in patients with a history of DVT (5.7%, 5/90). The median time from insertion to DVT was 11 (IQR, 5-14) days. DVTs were managed with LPC removal and systemic anticoagulation in accordance with catheter-related DVT guidelines. The rate of CRI was 0.6% (3/539), or 0.54 per 1,000 catheter days. Two CRIs had positive blood cultures, while one had negative cultures. Infections occurred after a median of 12 (IQR, 8-15) days of catheter dwell. Each was treated with LPC removal and IV antibiotics, with two patients receiving two weeks and one receiving six weeks of antibiotic therapy (Table 2).

With respect to secondary outcomes, the incidence of infiltration was 0.4% (2/539), thrombophlebitis 0.7% (4/539), and catheter occlusion 0.9% (5/539). The time to event was 8.5, 3.75, and 5.4 days, respectively. Collectively, 2.0% of devices experienced a minor complication.

 

 

DISCUSSION

In our single-center study, LPCs were primarily inserted for difficult venous access or parenteral antibiotics. Despite a clinically complex population with a high number of comorbidities, rates of major and minor complications associated with LPCs were low. These data suggest that LPCs are a safe alternative to PICCs and other central access devices for short-term use.

Our incidence of CRI of 0.6% (0.54 per 1,000 catheter days) is similar to or lower than rates in other studies.2,10,11 An incidence of 0%-1.5% was observed in two recent publications on midline catheters, with rates varying widely across individual studies and hospital sites.12,13 A systematic review of intravascular devices reported CRI rates of 0.4% (0.2 per 1,000 catheter days) for midlines and 0.1% (0.5 per 1,000 catheter days) for peripheral IVs, in contrast to 3.1% (1.1 per 1,000 catheter days) for PICCs.14 However, catheters of varying lengths and diameters were used in the studies within the review, potentially leading to heterogeneous outcomes. In accordance with existing data, CRI incidence in our study increased with catheter dwell time.10

The 1.7% rate of DVT observed in our study is on the lower end of existing data (1.4%-5.9%).12-15 Compared with PICCs (2%-15%), the incidence of venous thrombosis appears to be lower with midlines/LPCs—justifying their use as an alternative device for IV access.7,9,12,14 There was an overall low rate of minor complications, similar to recently published results.10 As rates were greater in patients with a history of DVT (5.7%), caution is warranted when using these devices in this population.

Our experience with LPCs suggests financial and patient benefits. The cost of LPCs is lower than that of central access devices.4 Because rates of CRI were low, appropriate LPC use may reduce costs related to CLABSIs from PICC use. LPCs may also allow routine blood draws, which could improve the patient experience, albeit with risks of its own. Current recommendations support the use of PICCs or LPCs, somewhat interchangeably, for patients with appropriate indications needing IV therapy for more than five to six days.2,7 However, LPCs now account for 57% of vascular access procedures in our center and have reduced reliance on PICCs and their attendant complications.

Our study has several limitations. First, the terms LPC and midline are often used interchangeably in the literature.4,5 Reported complication rates therefore may not reflect those of LPCs alone, limiting comparisons. Second, ours was a single-center study with experts assessing device appropriateness and performing ultrasound-guided insertions; our findings may not generalize to dissimilar settings. Third, we did not track LPC complications such as nonpatency and leakage. Because prior studies reported high rates of such complications, caution is advised when interpreting our findings.15 Finally, we retrospectively extracted data from our medical records; limitations in documentation may influence our findings.

CONCLUSION

In patients requiring short-term IV therapy, these data suggest LPCs have low complication rates and may be safely used as an alternative option for venous access.

Acknowledgments

The authors thank Drs. Laura Hernandez, Andres Mendez Hernandez, and Victor Prado for their assistance in data collection. The authors also thank Mr. Onofre Donceras and Dr. Sharon Welbel from the John H. Stroger, Jr. Hospital of Cook County Department of Infection Control & Epidemiology for their assistance in reviewing local line infection data.

Drs. Patel and Chopra developed the study design. Drs. Patel, Araujo, Parra Rodriguez, Ramirez Sanchez, and Chopra contributed to manuscript writing. Ms. Snyder provided statistical analysis. All authors have seen and approved the final manuscript for submission.

Disclosures

The authors have nothing to disclose.

References

1. Anderson NR. Midline catheters: the middle ground of intravenous therapy administration. J Infus Nurs. 2004;27(5):313-321.
2. Adams DZ, Little A, Vinsant C, et al. The midline catheter: a clinical review. J Emerg Med. 2016;51(3):252-258. https://doi.org/10.1016/j.jemermed.2016.05.029.
3. Scoppettuolo G, Pittiruti M, Pitoni S, et al. Ultrasound-guided “short” midline catheters for difficult venous access in the emergency department: a retrospective analysis. Int J Emerg Med. 2016;9(1):3. https://doi.org/10.1186/s12245-016-0100-0.
4. Qin KR, Nataraja RM, Pacilli M. Long peripheral catheters: is it time to address the confusion? J Vasc Access. 2018;20(5). https://doi.org/10.1177/1129729818819730.
5. Pittiruti M, Scoppettuolo G. The GAVeCeLT Manual of PICC and Midlines. Milano: EDRA; 2016.
6. Dawson RB, Moureau NL. Midline catheters: an essential tool in CLABSI reduction. Infection Control Today. https://www.infectioncontroltoday.com/clabsi/midline-catheters-essential-tool-clabsi-reduction. Accessed February 19, 2018
7. Chopra V, Flanders SA, Saint S, et al. The Michigan Appropriateness Guide for Intravenous Catheters (MAGIC): results from a multispecialty panel using the RAND/UCLA appropriateness method. Ann Intern Med. 2015;163(6):S1-S40. https://doi.org/10.7326/M15-0744.
8. Timsit JF, Schwebel C, Bouadma L, et al. Chlorhexidine-impregnated sponges and less frequent dressing changes for prevention of catheter-related infections in critically ill adults: a randomized controlled trial. JAMA. 2009;301(12):1231-1241. https://doi.org/10.1001/jama.2009.376.
9. Bahl A, Karabon P, Chu D. Comparison of venous thrombosis complications in midlines versus peripherally inserted central catheters: are midlines the safer option? Clin Appl Thromb Hemost. 2019;25. https://doi.org/10.1177/1076029619839150.
10. Goetz AM, Miller J, Wagener MM, et al. Complications related to intravenous midline catheter usage. A 2-year study. J Intraven Nurs. 1998;21(2):76-80.
11. Xu T, Kingsley L, DiNucci S, et al. Safety and utilization of peripherally inserted central catheters versus midline catheters at a large academic medical center. Am J Infect Control. 2016;44(12):1458-1461. https://doi.org/10.1016/j.ajic.2016.09.010.
12. Chopra V, Kaatz S, Swaminathan L, et al. Variation in use and outcomes related to midline catheters: results from a multicentre pilot study. BMJ Qual Saf. 2019;28(9):714-720. https://doi.org/10.1136/bmjqs-2018-008554.
13. Badger J. Long peripheral catheters for deep arm vein venous access: A systematic review of complications. Heart Lung. 2019;48(3):222-225. https://doi.org/10.1016/j.hrtlng.2019.01.002.
14. Maki DG, Kluger DM, Crnich CJ. The risk of bloodstream infection in adults with different intravascular devices: a systematic review of 200 published prospective studies. Mayo Clin Proc. 2006;81(9):1159-1171. https://doi.org/10.4065/81.9.1159.
15. Zerla PA, Caravella G, De Luca G, et al. Open- vs closed-tip valved peripherally inserted central catheters and midlines: Findings from a vascular access database. J Assoc Vasc Access. 2015;20(3):169-176. https://doi.org/10.1016/j.java.2015.06.001.

Issue
Journal of Hospital Medicine 14(12)
Page Number
758-760. Published Online First October 23, 2019

© 2019 Society of Hospital Medicine

Correspondence Location
Sanjay A. Patel, MD; E-mail: [email protected]; Telephone: 312-864-4522.

Hospital Medicine Has a Specialty Code. Is the Memo Still in the Mail?

In recognition of the importance of Hospital Medicine (HM) and its practitioners, the Centers for Medicare and Medicaid Services (CMS) awarded the field a specialty designation in 2016. The code is self-selected by hospitalists and used by the CMS for programmatic and claims processing purposes. The HM code (“C6”), submitted to the CMS by the provider or their designee through the Provider Enrollment, Chain and Ownership System (PECOS), in turn links to the provider's National Provider Identifier (NPI) data.

The Society of Hospital Medicine® sought the designation given the growth of hospitalists practicing nationally, their impact on the practice of medicine in the inpatient setting,1 and their secondary effects on global care.2 In fact, early CMS efforts to transition physician payments to value-based payment used specialty designations to create benchmarks in cost metrics, heightening the importance of hospitalists being able to assess their performance. The need to identify shifts in resource utilization and workforce mix in the broader context of health reforms necessitated action. Essentially, to understand the “why’s” of hospital medicine, the field required an accounting of the “who’s” and “where’s.”

The CMS granted the C6 designation in 2016, and it went live in April 2017. Despite the code’s brief two-year tenure, calls for its creation long predated its existence. The new designation therefore warrants an initial look to help steer the role of HM in future CMS and managed care organization (MCO) quality, payment, or practice improvement activities.

METHODS

We analyzed publicly available 2017 Medicare Part B utilization data3 to explore the rates of Evaluation & Management (E&M) codes used across specialties, using the C6 designation to identify hospitalists.

To estimate the percentage of hospitalists likely billing under the C6 designation, we then compared the rates of C6 billing with expected rates of hospitalist E&M billing based on an analysis of hospitalist prevalence in the 2012 Medicare physician payment data. Prior work to identify hospitalists before the implementation of the C6 designation relied on thresholds of inpatient codes for various inpatient E&M services.4,5 We used our previously published approach, a threshold of 60% of inpatient E&M hospital services, to differentiate hospitalists from their parent specialties.6 We also calculated the expected rates of E&M billing for other select specialty services by applying the 2012 E&M coding trends to the 2017 data.
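The 60% threshold rule can be sketched in a few lines. The following is a hypothetical illustration only: the specific CPT code set and the claims data layout are assumptions, not the authors' exact implementation.

```python
# Hedged sketch of threshold-based hospitalist identification: classify a
# provider as a hospitalist when >= 60% of their E&M services are inpatient
# hospital codes. The code set below is illustrative, not the study's exact set.
INPATIENT_EM = {
    "99221", "99222", "99223",  # initial hospital care
    "99231", "99232", "99233",  # subsequent hospital care
    "99238", "99239",           # hospital discharge management
}

def is_hospitalist(em_claims, threshold=0.60):
    """em_claims: list of (cpt_code, count) pairs for one provider."""
    total = sum(n for _, n in em_claims)
    inpatient = sum(n for code, n in em_claims if code in INPATIENT_EM)
    return total > 0 and inpatient / total >= threshold

# A generalist billing mostly inpatient codes is classified as a hospitalist:
claims = [("99232", 800), ("99238", 150), ("99213", 50)]  # 99213 = office visit
print(is_hospitalist(claims))  # True (950/1000 = 95% inpatient)
```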

RESULTS

Table 1 shows the distribution of inpatient E&M codes billed by hospitalists using the C6 identification, as well as the use of those codes by other specialists. Hospitalists identified by the C6 designation billed only 2%-5% of inpatient and 6% of observation codes. As an example, in 2017, discharge CPT codes 99238 and 99239 were used 7,872,323 times. However, C6-identified hospitalists accounted for only 441,420 of these codes.
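The C6 share of those discharge codes can be computed directly from the counts above:

```python
# C6-identified hospitalists' share of 2017 discharge codes (counts from text).
total_discharge = 7_872_323  # CPT 99238 + 99239, all specialties, 2017
c6_discharge = 441_420       # billed under the C6 designation
print(f"C6 share: {100 * c6_discharge / total_discharge:.1f}%")  # C6 share: 5.6%
```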

Table 2 compares observed billing rates by specialty, using the C6 designation to identify hospitalists, with the rates expected when the 2012 threshold-based specialty designation is applied to the 2017 data. This comparison demonstrates that hospitalist billing based on C6 modifier use is approximately one-tenth of the expected volume of E&M services.

DISCUSSION

We examined the patterns of hospitalist billing using the C6 hospital medicine specialty modifier, comparing billing patterns with what we would expect hospitalist activity to be if we had used a threshold-based approach. The difference between the C6 and the threshold-based approaches to assessing hospitalist activity suggests that as few as 10% of hospitalists have adopted the C6 code.

Why is adoption of the C6 modifier so low? Although administrative data do not allow us to identify why providers chose to forgo the C6 designation, we can speculate on causes. To date, using the code carries little direct risk and offers recognized benefits. We hypothesize that several factors could be impeding uptake. The first may be knowledge-related: hospitalists might be unfamiliar with the specialty code or unaware of the importance of accurately capturing hospitalist practice patterns. They may also wrongly assume that their practices are aware of the revision or have submitted the appropriate paperwork. Similarly, practice personnel may lack knowledge of the code or the importance of its use. The second factor may be logistical: administrative barriers, such as difficulty accessing PECOS and out-of-date paper registration forms, impede fast uptake. The final factor may involve professionals whose tenures as hospitalists will be brief and who hesitate to carry an identifier into their next non-HM position. Providers may misperceive that using the C6 code could limit their future scope of practice when, in fact, they may change their Medicare specialty designation at any time.

Changes in reimbursement models, including Bundled Payments for Care Improvement Advanced (BPCI-A) and other value-based initiatives, heighten the need for more accurate identification of the specialty. Classifying individual providers and groups to make valid performance comparisons is relevant for the same reasons. The CMS continues to advance cost and efficiency measures on its publicly accessible physiciancompare.gov website.7 Without an improved ability to identify services provided by hospitalists, by both the CMS and commercial entities, the potential benefits delivered by hospitalists in terms of improved care quality, safety, or efficiency could go undetected by payers and policymakers. Moreover, C6 may be used in other ways by the CMS throughout its payment systems and programmatic efforts that use specialty to differentiate between Medicare providers.8 Finally, C6 is an identifier for the Medicare fee-for-service system; state programs and MCOs may not identify hospitalists in the same manner, or at all. This may make it more difficult for those groups and HM researchers to study trends in care delivery. The specialty needs to engage with these other payers to help revise their information systems to better account for how hospitalists care for their insured populations.

Although we would expect a natural increase in C6 adoption over time, optimally meeting stakeholders’ data needs requires more rapid uptake. Our analysis is limited by our assumption that specialty patterns of code use remain similar from 2012 to 2017. Regardless, the magnitude of the difference between the estimate of hospitalists using the C6 versus billing thresholds strongly suggests underuse of the C6 designation. The CMS and MCOs have an increasing need for valid and representative data, and C6 can be used to assess “HM-adjusted” resource utilization, relative value units (RVUs), and performance evaluations. Therefore, hospitalists may see more incentives to use the C6 specialty code in a manner consistent with other recognized subspecialties. 

Disclaimer

The research reported here was supported by the Department of Veterans Affairs, Veterans Health Administration, and the Health Services Research and Development Service. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs.

References

1. Wachter RM, Goldman L. Zero to 50,000—The 20th Anniversary of the Hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/NEJMp1607958.
2. Quinn R. HM 2016: A year in review. The Hospitalist. 2016;12. https://www.the-hospitalist.org/hospitalist/article/121419/everything-you-need-know-about-bundled-payments-care-improvement
3. Centers for Medicare and Medicaid Services. Medicare Utilization for Part B. https://www.cms.gov/research-statistics-data-and-systems/statistics-trends-and-reports/medicarefeeforsvcpartsab/medicareutilizationforpartb.html. Accessed June 14, 2019.
4. Saint S, Christakis DA, Baldwin L-M, Rosenblatt R. Is hospitalism new? An analysis of Medicare data from Washington State in 1994. Eff Clin Pract. 2000;3(1):35-39.
5. Welch WP, Stearns SC, Cuellar AE, Bindman AB. Use of hospitalists by Medicare beneficiaries: a national picture. Medicare Medicaid Res Rev. 2014;4(2). https://doi.org/10.5600/mmrr2014-004-02-b01.
6. Lapps J, Flansbaum B, Leykum L, Boswell J, Haines L. Updating threshold-based identification of hospitalists in 2012 medicare pay data. J Hosp Med. 2016;11(1):45-47. https://doi.org/10.1002/jhm.2480.
7. Centers for Medicare & Medicaid Services. Physician Compare Initiative. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/physician-compare-initiative/index.html. Accessed June 14, 2019.
8. Centers for Medicare & Medicaid Services. Revisions to Payment Policies under the Medicare Physician Fee Schedule, Quality Payment Program and Other Revisions to Part B for CY 2020 (CMS-1715-P). Accessed prior to publishing in the Federal Register through www.regulations.gov.

Author and Disclosure Information

1Geisinger Health System, Danville, Pennsylvania; 2University of Texas Health Science Center at San Antonio, San Antonio, Texas; 3South Texas Veterans Health Care System, San Antonio, Texas; 4Society of Hospital Medicine, Philadelphia, Pennsylvania.

Disclosures

The authors have nothing to disclose.

Issue
Journal of Hospital Medicine 15(2)
Page Number
91-93. Published online first September 18, 2019

In recognizing the importance of Hospital Medicine (HM) and its practitioners, the Centers for Medicare and Medicaid Services (CMS) awarded the field a specialty designation in 2016. The code is self-selected by hospitalists and used by the CMS for programmatic and claims processing purposes. The HM code (“C6”), submitted to the CMS by the provider or their designee through the Provider Enrollment Chain and Ownership System (PECOS), in turn links to the National Provider Identification provider data.

The Society of Hospital Medicine® sought the designation given the growth of hospitalists practicing nationally, their impact on the practice of medicine in the inpatient setting,1 and their secondary effects on global care.2 In fact, early efforts by the CMS to transition physician payments to the value-based payment used specialty designations to create benchmarks in cost metrics, heightening the importance for hospitalists to be able to assess their performance. The need to identify any shifts in resource utilization and workforce mix in the broader context of health reforms necessitated action. Essentially, to understand the “why’s” of hospital medicine, the field required an accounting of the “who’s” and “where’s.”

The CMS granted the C6 designation in 2016, and it went live in April 2017. Despite the code’s brief two-year tenure, calls for its creation long predated its existence. As such, the new modifier requires an initial look to help steer the role of HM in any future CMS and managed care organization (MCO) quality, payment, or practice improvement activities.

METHODS

We analyzed publicly available 2017 Medicare Part B utilization data3 to explore the rates of Evaluation & Management (E&M) codes used across specialties, using the C6 designation to identify hospitalists.

To try to estimate the percentage of hospitalists who were likely billing under the C6 designation, we then compared the rates of C6 billing to expected rates of hospitalist E&M billing based on an analysis of hospitalist prevalence in the 2012 Medicare physician payment data. Prior work to identify hospitalists before the implementation of the C6 designation relied on thresholds of inpatient codes for various inpatient E&M services.4,5 We used our previously published approach of a threshold of 60% of inpatient E&M hospital services to differentiate hospitalists from their parent specialties.6 We also calculated the expected rates of E&M billing for other select specialty services by applying the 2012 E&M coding trends to the 2017 data.

RESULTS

Table 1 shows the distribution of inpatient E&M codes billed by hospitalists using the C6 identification, as well as the use of those codes by other specialists. Hospitalists identified by the C6 designation billed only 2%-5% of inpatient and 6% of observation codes. As an example, in 2017, discharge CPT codes 99238 and 99239 were used 7,872,323 times. However, C6-identified hospitalists accounted for only 441,420 of these codes.

 

 

Table 2 compares the observed billing rates by specialty using the C6 designation to identify hospitalists with what would be the expected rates with the 2012 threshold-based specialty billing designation applied to the 2017 data. This comparison demonstrates that hospitalist billing based on the C6 modifier use is approximately one-tenth of what would have been their expected volume of E&M services.

DISCUSSION

We examined the patterns of hospitalist billing using the C6 hospital medicine specialty modifier, comparing billing patterns with what we would expect hospitalist activity to be if we had used a threshold-based approach. The difference between the C6 and the threshold-based approaches to assessing hospitalist activity suggests that as few as 10% of hospitalists have adopted the C6 code.

Why is the adoption of the C6 modifier so low? Although administrative data do not allow us to identify the reasons why providers chose to disregard the C6 designation, we can speculate on causes. There are, to date, low direct risks and recognized benefits with using the code. We hypothesize that several factors could be impeding whether providers use the modifier to bring about potential gains. The first may be knowledge-related; ie, hospitalists might not be familiar with the specialty code or unaware of the importance of accurately capturing hospitalist practice patterns. They may also wrongly assume that their practices are aware of the revision or have submitted the appropriate paperwork. Similarly, practice personnel may lack knowledge regarding the code or the importance of its use. The second factor may be logistical; ie, administrative barriers such as difficulty accessing the Provider Enrollment, Chain and Ownership System (PECOS) and out-of-date paper registration forms impede fast uptake. The final reason might be related to professionals whose tenures as hospitalists will be brief, and their unease of carrying an identifier into their next non-HM position prompts hesitation. Providers may have a misperception that using the C6 code may somehow impact or limit their future scope of practice, when, in fact, they may change their Medicare specialty designation at any time.

Changes in reimbursement models, including the Bundled Payments for Care Improvement Advanced (BPCI-A) and other value-based initiatives, heighten the need for a more accurate identification of the specialty. Classifying individual providers and groups to make valid performance comparisons is relevant for the same reasons. The CMS continues to advance cost and efficiency measures in its publicly accessible physiciancompare.gov website.7 Without an improved ability to identify services provided by hospitalists—by both CMS and commercial entities—the potential benefits delivered by hospitalists in terms of improved care quality, safety, or efficiency could go undetected by payers and policymakers. Moreover, C6 may be used in other ways by the CMS throughout its payment systems and programmatic efforts that use specialty to differentiate between Medicare providers.8 Finally, the C6 is an identifier for the Medicare fee-for-service system; state programs and MCOs may not identify hospitalists in the same manner, or at all. Therefore, it may make it more difficult for those groups and HM researchers to study the trends in care delivery changes. The specialty needs to engage with these other payers to assist in revising their information systems to better account for how hospitalists care for their insured populations.

Although we would expect a natural increase in C6 adoption over time, optimally meeting stakeholders’ data needs requires more rapid uptake. Our analysis is limited by our assumption that specialty patterns of code use remain similar from 2012 to 2017. Regardless, the magnitude of the difference between the estimate of hospitalists using the C6 versus billing thresholds strongly suggests underuse of the C6 designation. The CMS and MCOs have an increasing need for valid and representative data, and C6 can be used to assess “HM-adjusted” resource utilization, relative value units (RVUs), and performance evaluations. Therefore, hospitalists may see more incentives to use the C6 specialty code in a manner consistent with other recognized subspecialties. 

 

 

Disclaimer

The research reported here was supported by the Department of Veterans Affairs, Veterans Health Administration, and the Health Services Research and Development Service. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs.

In recognizing the importance of Hospital Medicine (HM) and its practitioners, the Centers for Medicare and Medicaid Services (CMS) awarded the field a specialty designation in 2016. The code is self-selected by hospitalists and used by the CMS for programmatic and claims processing purposes. The HM code (“C6”), submitted to the CMS by the provider or their designee through the Provider Enrollment Chain and Ownership System (PECOS), in turn links to the National Provider Identification provider data.

The Society of Hospital Medicine® sought the designation given the growth of hospitalists practicing nationally, their impact on the practice of medicine in the inpatient setting,1 and their secondary effects on global care.2 In fact, early efforts by the CMS to transition physician payments to the value-based payment used specialty designations to create benchmarks in cost metrics, heightening the importance for hospitalists to be able to assess their performance. The need to identify any shifts in resource utilization and workforce mix in the broader context of health reforms necessitated action. Essentially, to understand the “why’s” of hospital medicine, the field required an accounting of the “who’s” and “where’s.”

The CMS granted the C6 designation in 2016, and it went live in April 2017. Despite the code’s brief two-year tenure, calls for its creation long predated its existence. As such, the new modifier requires an initial look to help steer the role of HM in any future CMS and managed care organization (MCO) quality, payment, or practice improvement activities.

METHODS

We analyzed publicly available 2017 Medicare Part B utilization data3 to explore the rates of Evaluation & Management (E&M) codes used across specialties, using the C6 designation to identify hospitalists.

To try to estimate the percentage of hospitalists who were likely billing under the C6 designation, we then compared the rates of C6 billing to expected rates of hospitalist E&M billing based on an analysis of hospitalist prevalence in the 2012 Medicare physician payment data. Prior work to identify hospitalists before the implementation of the C6 designation relied on thresholds of inpatient codes for various inpatient E&M services.4,5 We used our previously published approach of a threshold of 60% of inpatient E&M hospital services to differentiate hospitalists from their parent specialties.6 We also calculated the expected rates of E&M billing for other select specialty services by applying the 2012 E&M coding trends to the 2017 data.

RESULTS

Table 1 shows the distribution of inpatient E&M codes billed by hospitalists using the C6 identification, as well as the use of those codes by other specialists. Hospitalists identified by the C6 designation billed only 2%-5% of inpatient and 6% of observation codes. As an example, in 2017, discharge CPT codes 99238 and 99239 were used 7,872,323 times. However, C6-identified hospitalists accounted for only 441,420 of these codes.

 

 

Table 2 compares the observed billing rates by specialty using the C6 designation to identify hospitalists with what would be the expected rates with the 2012 threshold-based specialty billing designation applied to the 2017 data. This comparison demonstrates that hospitalist billing based on the C6 modifier use is approximately one-tenth of what would have been their expected volume of E&M services.

DISCUSSION

We examined the patterns of hospitalist billing using the C6 hospital medicine specialty modifier, comparing billing patterns with what we would expect hospitalist activity to be if we had used a threshold-based approach. The difference between the C6 and the threshold-based approaches to assessing hospitalist activity suggests that as few as 10% of hospitalists have adopted the C6 code.

Why is adoption of the C6 modifier so low? Although administrative data do not allow us to identify why providers disregard the C6 designation, we can speculate on causes. To date, the code carries little direct risk and offers recognized benefits, so we hypothesize that several factors impede its use. The first may be knowledge-related; ie, hospitalists may be unfamiliar with the specialty code or unaware of the importance of accurately capturing hospitalist practice patterns. They may also wrongly assume that their practices know of the revision or have already submitted the appropriate paperwork. Similarly, practice personnel may lack knowledge of the code or of the importance of its use. The second factor may be logistical; ie, administrative barriers such as difficulty accessing the Provider Enrollment, Chain and Ownership System (PECOS) and out-of-date paper registration forms impede fast uptake. The final factor may involve professionals whose tenures as hospitalists will be brief and who hesitate to carry the identifier into their next non-HM position. These providers may misperceive that using the C6 code could somehow limit their future scope of practice when, in fact, they may change their Medicare specialty designation at any time.

Changes in reimbursement models, including Bundled Payments for Care Improvement Advanced (BPCI-A) and other value-based initiatives, heighten the need for more accurate identification of the specialty. Classifying individual providers and groups to make valid performance comparisons is relevant for the same reasons. The CMS continues to advance cost and efficiency measures on its publicly accessible physiciancompare.gov website.7 Without an improved ability, by both the CMS and commercial entities, to identify services provided by hospitalists, the potential benefits hospitalists deliver in care quality, safety, or efficiency could go undetected by payers and policymakers. Moreover, the C6 may be used in other ways throughout CMS payment systems and programmatic efforts that use specialty to differentiate between Medicare providers.8 Finally, the C6 is an identifier for the Medicare fee-for-service system; state programs and MCOs may not identify hospitalists in the same manner, or at all, which makes it harder for those payers and for HM researchers to study trends in care delivery. The specialty needs to engage with these other payers to help revise their information systems to better account for how hospitalists care for their insured populations.

Although we would expect a natural increase in C6 adoption over time, optimally meeting stakeholders’ data needs requires more rapid uptake. Our analysis is limited by the assumption that specialty patterns of code use remained similar from 2012 to 2017. Regardless, the magnitude of the difference between the C6-based and threshold-based estimates strongly suggests underuse of the C6 designation. The CMS and MCOs have an increasing need for valid and representative data, and the C6 can be used to assess “HM-adjusted” resource utilization, relative value units (RVUs), and performance evaluations. Hospitalists may therefore see growing incentives to use the C6 specialty code in a manner consistent with other recognized subspecialties.


Disclaimer

The research reported here was supported by the Department of Veterans Affairs, Veterans Health Administration, and the Health Services Research and Development Service. The views expressed in this article are those of the authors and do not necessarily reflect the position or policy of the Department of Veterans Affairs.

References

1. Wachter RM, Goldman L. Zero to 50,000—The 20th Anniversary of the Hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/NEJMp1607958.
2. Quinn R. HM 2016: A year in review. The Hospitalist. 2016;12. https://www.the-hospitalist.org/hospitalist/article/121419/everything-you-need-know-about-bundled-payments-care-improvement
3. Centers for Medicare and Medicaid Services. Medicare Utilization for Part B. https://www.cms.gov/research-statistics-data-and-systems/statistics-trends-and-reports/medicarefeeforsvcpartsab/medicareutilizationforpartb.html. Accessed June 14, 2019.
4. Saint S, Christakis DA, Baldwin L-M, Rosenblatt R. Is hospitalism new? An analysis of Medicare data from Washington State in 1994. Eff Clin Pract. 2000;3(1):35-39.
5. Welch WP, Stearns SC, Cuellar AE, Bindman AB. Use of hospitalists by Medicare beneficiaries: a national picture. Medicare Medicaid Res Rev. 2014;4(2). https://doi.org/10.5600/mmrr2014-004-02-b01.
6. Lapps J, Flansbaum B, Leykum L, Boswell J, Haines L. Updating threshold-based identification of hospitalists in 2012 medicare pay data. J Hosp Med. 2016;11(1):45-47. https://doi.org/10.1002/jhm.2480.
7. Centers for Medicare & Medicaid Services. Physician Compare Initiative. https://www.cms.gov/Medicare/Quality-Initiatives-Patient-Assessment-Instruments/physician-compare-initiative/index.html. Accessed June 14, 2019.
8. Centers for Medicare & Medicaid Services. Revisions to Payment Policies under the Medicare Physician Fee Schedule, Quality Payment Program and Other Revisions to Part B for CY 2020 (CMS-1715-P). Accessed prior to publishing in the Federal Register through www.regulations.gov.


Issue

Journal of Hospital Medicine 15(2)

Page Number

91-93. Published online first September 18, 2019

Article Source

© 2019 Society of Hospital Medicine

Correspondence Location

Bradley Flansbaum, DO, MPH; E-mail: [email protected]; Telephone: 570-214-9585; Twitter: @BradleyFlansbau

Community Pediatric Hospitalist Workload: Results from a National Survey


As a newly recognized specialty, pediatric hospital medicine (PHM) continues to expand and diversify.1 Pediatric hospitalists care for children in hospitals ranging from small, rural community hospitals to large, free-standing quaternary children’s hospitals.2-4 In addition, more than 10% of graduating pediatric residents are seeking future careers within PHM.5

In 2018, Fromme et al. published a study describing clinical workload for pediatric hospitalists within university-based settings.6 They characterized the diversity of work models and programmatic sustainability but limited the study to university-based programs. With over half of children receiving care within community hospitals,7 workforce patterns for community-based pediatric hospitalists should be characterized to maximize sustainability and minimize attrition across the field.

In this study, we describe programmatic variability in clinical work expectations of 70 community-based PHM programs. We aimed to describe existing work models and expectations of community-based program leaders as they relate to their unique clinical setting.

METHODS

We conducted a cross-sectional survey of community-based PHM site directors through structured interviews. Community hospital programs were self-defined by the study participants, although they were typically general hospitals that admit pediatric patients and are neither free-standing children’s hospitals nor children’s hospitals within a general hospital. Survey respondents were asked to answer questions reflecting expectations only at their community hospital.

Survey Design and Content

Building from a tool used by Fromme et al.6 we created a 12-question structured interview questionnaire focused on three areas: (1) full-time equivalent (FTE) metrics including definitions of a 1.0 FTE, “typical” shifts, and weekend responsibilities; (2) work volume including census parameters, service-line coverage expectations, back-up systems, and overnight call responsibilities; and (3) programmatic model including sense of sustainability (eg, minimizing burnout and attrition), support for activities such as administrative or research time, and employer model (Appendix).

We modified the survey through research team consensus. After pilot-testing by research team members at their own sites, the survey was refined for item clarity, structural design, and length. Because we anticipated wide variability in work models, we administered the survey through phone interviews rather than traditional written distribution. The research team discussed how each question should be asked, and responses were clarified to maintain consistency.


Survey Administration

Given the absence of a national registry or database for community-based PHM programs, study participation was solicited through an invitation posted on the American Academy of Pediatrics Section on Hospital Medicine (AAP SOHM) Listserv and the AAP SOHM Community Hospitalist Listserv in May 2018. Invitations were posted twice, two weeks apart. Each research team member completed 6-19 interviews. Responses to survey questions were recorded in REDCap, a secure, web-based data capture instrument.8

Participation in the study constituted implied consent. Participants did not receive a monetary incentive, although respondents were offered the deidentified survey data in return for participation. The study was exempted by the University of Chicago Institutional Review Board.

Data Analysis

Employers were dichotomized as community hospital employer (including primary community hospital employment/private organization) or noncommunity hospital employer (including children’s/university hospital employment or school of medicine). Descriptive statistics were reported to compare the demographics of the two employer groups. P values were calculated using two-sample t-tests for continuous variables and chi-square or Fisher exact tests for categorical variables. The Mann–Whitney U-test was used for continuous variables that were not normally distributed. Analyses were performed using the R Statistical Programming Language (R Foundation for Statistical Computing, Vienna, Austria), version 3.4.3.
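As a rough, hypothetical illustration of the continuous-variable comparisons (the study’s analysis was run in R, and the data below are invented), the pooled-variance two-sample t statistic can be computed in pure Python:

```python
import math
from statistics import mean, variance

def two_sample_t(x, y):
    """Pooled-variance (Student's) two-sample t statistic, the form
    underlying the continuous-variable comparisons."""
    nx, ny = len(x), len(y)
    # Pooled sample variance across the two groups
    sp2 = ((nx - 1) * variance(x) + (ny - 1) * variance(y)) / (nx + ny - 2)
    return (mean(x) - mean(y)) / math.sqrt(sp2 * (1 / nx + 1 / ny))

# Invented FTE hours/year samples for two employer groups
community = [2016, 1950, 2100, 1900, 1980]
noncommunity = [1800, 1760, 1850, 1820, 1790]
print(round(two_sample_t(community, noncommunity), 2))
```

A p value would then come from the t distribution with nx + ny - 2 degrees of freedom; that lookup is omitted here to keep the sketch to the standard library.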

RESULTS

Participation and Program Characteristics

We interviewed 70 community-based PHM site directors representing programs across 29 states (Table 1) and five geographic regions: Midwest (34.3%), Northeast (11.4%), Southeast (15.7%), Southwest (4.3%), and West (34.3%). Employer models varied across groups, with more noncommunity hospital employers (57%) than community hospital employers (43%). The top three services covered by pediatric hospitalists were pediatric inpatient or observation bed admissions (97%), emergency department consults (89%), and general newborns (67%). PHM programs also provided coverage for other services, including newborn deliveries (43%), Special Care Nursery/Level II Neonatal Intensive Care Unit (41%), step-down units (20%), and mental health units (13%). About 59% of programs provided education for family medicine residents and 36% for pediatric residents; 70% worked with advanced practice providers. The majority of programs (70%) provided in-house coverage overnight.

Clinical Work Expectations and Employer Model

Clinical work expectations varied broadly across programs (Table 2). The median expected hours for a 1.0 FTE was 1,882 hours per year (interquartile range [IQR] 1,805, 2,016), and the median expected weekend coverage/year (defined as covering two days or two nights of the weekend) was 21 (IQR 14, 24). Most programs did not expand staff coverage based on seasonality (73%), and less than 20% of programs operated with a census cap. Median support for nondirect patient care activities was 4% (IQR 0, 10) of a program’s total FTE (ie, a 5.0 FTE program would have 0.20 FTE support). Programs with community hospital employers had an 8% higher expectation of 1.0 FTE hours/year (P = .01) and viewed an appropriate pediatric morning census as 20% higher (P = .01; Table 2).
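The median-and-IQR reporting above can be reproduced with Python’s standard library; the hours below are invented stand-ins for the 70 programs’ survey responses:

```python
from statistics import median, quantiles

# Hypothetical 1.0 FTE hours/year from a handful of programs; the
# actual survey covered 70 sites, so these values are illustrative only.
hours = [1764, 1805, 1860, 1882, 1950, 2016, 2100]

# Quartile cut points with min/max included in the interpolation
q1, _, q3 = quantiles(hours, n=4, method="inclusive")
print(median(hours), (q1, q3))  # median with its interquartile range
```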

Program Sustainability

Twenty-six (37%) site directors described their program as unsustainable. When programmatic characteristics and clinical work expectations were analyzed by perception of sustainability, we observed no difference between programs perceived as sustainable and those perceived as unsustainable in the number of 1.0 FTE hours/year (P = .16), weekends/year (P = .65), in-house call (P = .36), or the presence of a back-up system (P = .61).


DISCUSSION

To our knowledge, this study is the first to describe clinical work models exclusively for pediatric community hospitalist programs. We found that expectations for clinical FTE hours, weekend coverage, appropriate morning census, support for nondirect patient care activities, and perception of sustainability varied broadly across programs. The only variable explaining some of these differences was the employer model: programs employed by a community hospital had higher expectations for hours/year and for an appropriate morning pediatric census than those with noncommunity hospital employers.

With a growing emphasis on physician burnout and career satisfaction,9-11 understanding the characteristics of community hospital work settings is critical for identifying and building sustainable employment models. Previous studies have identified that the balance of clinical and nonclinical responsibilities and the setting of community versus university-based programs are major contributors to burnout and career satisfaction.9,11 Interestingly, although community hospital-based programs have limited FTE for nondirect patient care activities, we found that a higher percentage of program site directors perceived their program models as sustainable when compared with university-based programs in prior research (63% versus 50%).6 Elucidating why community hospital PHM programs are perceived as more sustainable provides an opportunity for future research. Potential reasons may include fewer academic requirements for promotion or an increased connection to a local community.

We also found that the employer model had a statistically significant impact on expected FTE hours per year but not on perception of sustainability. Programs employed by community hospitals worked 8% more hours per year than those employed by noncommunity hospital employers and accepted a higher morning pediatric census. This variation in hours and census level appropriateness is likely multifactorial, potentially from higher nonclinical expectations for promotion (eg, academic or scholarly production) at school of medicine or children’s hospital employed programs versus limited reimbursement for administrative responsibilities within community hospital employment models.

There are several potential next steps for our findings. As our data are the first attempt (to our knowledge) at describing the current practice and expectations exclusively within community hospital programs, this study can be used as a starting point for the development of workload expectation standards. Increasing transparency nationally for individual community programs potentially promotes discussions around burnout and attrition. Having objective data to compare program models may assist in advocating with local hospital leadership for restructuring that better aligns with national norms.

Our study has several limitations. First, our sampling frame was based on self-selection by site directors. This may have biased the sample toward programs with higher workloads motivated to compare themselves against a standard, potentially leading to an overestimation of hours. Second, without a registry or database for community-based pediatric hospitalist programs, we do not know what percentage of community-based programs our sample represents. Although our results cannot speak for all community PHM programs, we attempted to mitigate nonresponse bias through the breadth of programs represented, which spanned 29 states, five geographic regions, and teaching and nonteaching programs. The interview-based method for data collection allowed the research team to clarify questions and responses across sites, thereby improving the quality and consistency of the data for the represented sample. Finally, there may be other factors contributing to sustainability that we did not address in this study, such as dependence on billable encounters for salary support.


CONCLUSION

Because PHM is a newly recognized subspecialty, a reference that community-based program leaders can use to determine and discuss individual models and expectations with hospital administrators may help address programmatic sustainability. It may also allow analysis of long-term career satisfaction and longevity within community PHM programs based on workload. Future studies should further explore root causes for workload discrepancies between community- and university-employed programs and establish potential standards for PHM program development.

Acknowledgments

We would like to thank the Stanford School of Medicine Quantitative Sciences Unit staff for their assistance in statistical analysis.

Disclosure

The authors have nothing to disclose.

References

1. Wachter RM, Goldman L. Zero to 50,000—The 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/NEJMp1607958.
2. Gosdin C, Simmons J, Yau C, Sucharew H, Carlson D, Paciorkowski N. Survey of academic pediatric hospitalist programs in the US: organizational, administrative, and financial factors. J Hosp Med. 2013;8(6):285-291. https://doi.org/10.1002/jhm.2020.
3. Paul DH, Jennifer D, Elizabeth R, et al. Proposed dashboard for pediatric hospital medicine groups. Hosp Pediatr. 2012;2(2):59-68. https://doi.org/10.1542/hpeds.2012-0004.
4. Gary LF, Kathryn B, Kamilah N, Indu L. Characteristics of the pediatric hospitalist workforce: its roles and work environment. Pediatrics. 2007;120(1):33-39. https://doi.org/10.1542/peds.2007-0304.
5. Leyenaar JK, Frintner MP. Graduating pediatric residents entering the hospital medicine workforce, 2006-2015. Acad Pediatr. 2018;18(2):200-207. https://doi.org/10.1016/j.acap.2017.05.001.
6. Fromme HB, Chen CO, Fine BR, Gosdin C, Shaughnessy EE. Pediatric hospitalist workload and sustainability in university-based programs: results from a national interview-based survey. J Hosp Med. 2018;13(10):702-705. https://doi.org/10.12788/jhm.2977.
7. Leyenaar JK, Ralston SL, Shieh MS, Pekow PS, Mangione-Smith R, Lindenauer PK. Epidemiology of pediatric hospitalizations at general hospitals and freestanding children’s hospitals in the United States. J Hosp Med. 2016;11(11):743-749. https://doi.org/10.1002/jhm.2624.
8. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. https://doi.org/10.1016/j.jbi.2008.08.010.
9. Laurie AP, Aisha BD, Mary CO. Association between practice setting and pediatric hospitalist career satisfaction. Hosp Pediatr. 2013;3(3):285-291. https://doi.org/10.1542/hpeds.2012-0085.
10. Hinami K, Whelan CT, Wolosin RJ, Miller JA, Wetterneck TB. Worklife and satisfaction of hospitalists: toward flourishing careers. J Gen Intern Med. 2011;27(1):28-36. https://doi.org/10.1007/s11606-011-1780-z.
11. Hinami K, Whelan CT, Miller JA, Wolosin RJ, Wetterneck TB. Job characteristics, satisfaction, and burnout across hospitalist practice models. J Hosp Med. 2012;7(5):402-410. https://doi.org/10.1002/jhm.1907.

Issue

Journal of Hospital Medicine 14(11)

Page Number

682-685. Published online first August 21, 2019

As a newly recognized specialty, pediatric hospital medicine (PHM) continues to expand and diversify.1 Pediatric hospitalists care for children in hospitals ranging from small, rural community hospitals to large, free-standing quaternary children’s hospitals.2-4 In addition, more than 10% of graduating pediatric residents are seeking future careers within PHM.5

In 2018, Fromme et al. published a study describing clinical workload for pediatric hospitalists within university-based settings.6 They characterized the diversity of work models and programmatic sustainability but limited the study to university-based programs. With over half of children receiving care within community hospitals,7 workforce patterns for community-based pediatric hospitalists should be characterized to maximize sustainability and minimize attrition across the field.

In this study, we describe programmatic variability in clinical work expectations of 70 community-based PHM programs. We aimed to describe existing work models and expectations of community-based program leaders as they relate to their unique clinical setting.

METHODS

We conducted a cross-sectional survey of community-based PHM site directors through structured interviews. Community hospital programs were self-defined by the study participants, although typically defined as general hospitals that admit pediatric patients and are not free-standing or children’s hospitals within a general hospital. Survey respondents were asked to answer questions only reflecting expectations at their community hospital.

Survey Design and Content

Building from a tool used by Fromme et al.6 we created a 12-question structured interview questionnaire focused on three areas: (1) full-time employment (FTE) metrics including definitions of a 1.0 FTE, “typical” shifts, and weekend responsibilities; (2) work volume including census parameters, service-line coverage expectations, back-up systems, and overnight call responsibilities; and (3) programmatic model including sense of sustainability (eg, minimizing burnout and attrition), support for activities such as administrative or research time, and employer model (Appendix).

We modified the survey through research team consensus. After pilot-testing by research team members at their own sites, the survey was refined for item clarity, structural design, and length. We chose to administer surveys through phone interviews over a traditional distribution due to anticipated variability in work models. The research team discussed how each question should be asked, and responses were clarified to maintain consistency.

 

 

Survey Administration

Given the absence of a national registry or database for community-based PHM programs, study participation was solicited through an invitation posted on the American Academy of Pediatrics Section on Hospital Medicine (AAP SOHM) Listserv and the AAP SOHM Community Hospitalist Listserv in May 2018. Invitations were posted twice at two weeks apart. Each research team member completed 6-19 interviews. Responses to survey questions were recorded in REDCap, a secure, web-based data capture instrument.8

Participating in the study was considered implied consent, and participants did not receive a monetary incentive, although respondents were offered deidentified survey data for participation. The study was exempted through the University of Chicago Institutional Review Board.

Data Analysis

Employers were dichotomized as community hospital employer (including primary community hospital employment/private organization) or noncommunity hospital employer (including children’s/university hospital employment or school of medicine). Descriptive statistics were reported to compare the demographics of two employer groups. P values were calculated using two-sample t-tests for the continuous variables and chi-square or Fisher-exact tests for the categorical variables. Mann–Whitney U-test was performed for continuous variables without normality. Analyses were performed using the R Statistical Programming Language (R Foundation for Statistical Computing, Vienna, Austria), version 3.4.3.

RESULTS

Participation and Program Characteristics

We interviewed 70 community-based PHM site directors representing programs across 29 states (Table 1) and five geographic regions: Midwest (34.3%), Northeast (11.4%), Southeast (15.7%), Southwest (4.3%), and West (34.3%). Employer models varied across groups, with more noncommunity hospital employers (57%) than community hospital employers (43%). The top three services covered by pediatric hospitalists were pediatric inpatient or observation bed admissions (97%), emergency department consults (89%), and general newborns (67%). PHM programs also provided coverage for other services, including newborn deliveries (43%), Special Care Nursery/Level II Neonatal Intensive Care Unit (41%), step-down unit (20%), and mental health units (13%). About 59% of programs provided education for family medicine residents, 36% were for pediatric residents, and 70% worked with advanced practice providers. The majority of programs (70%) provided in-house coverage overnight.

Clinical Work Expectations and Employer Model

Clinical work expectations varied broadly across programs (Table 2). The median expected hours for a 1.0 FTE was 1,882 hours per year (interquartile range [IQR] 1,805, 2,016), and the median expected weekend coverage/year (defined as covering two days or two nights of the weekend) was 21 (IQR 14, 24). Most programs did not expand staff coverage based on seasonality (73%), and less than 20% of programs operated with a census cap. Median support for nondirect patient care activities was 4% (IQR 0,10) of a program’s total FTE (ie, a 5.0 FTE program would have 0.20 FTE support). Programs with community hospital employers had an 8% higher expectation of 1.0 FTE hours/year (P = .01) and viewed an appropriate pediatric morning census as 20% higher (P = .01; Table 2).

Program Sustainability

Twenty-six (37%) site directors described their program as unsustainable. When programmatic characteristics and clinical work expectations were analyzed by perception of sustainability, we observed no difference between programs that were perceived as unsustainable in the number of 1.0 FTE hours/year (P = .16), weekends/year (P = .65), in-house call (P = .36), or the presence of a back-up system (P = .61).

 

 

DISCUSSION

To our knowledge, this study is the first to describe clinical work models exclusively for pediatric community hospitalist programs. We found that expectations for clinical FTE hours, weekend coverage, appropriate morning census, support for nondirect patient care activities, and perception of sustainability varied broadly across programs. The only variable affecting some of these differences was employer model, with those employed by a community hospital employer having a higher expectation for hours/year and appropriate morning pediatric census than those employed by noncommunity hospital employers.

With a growing emphasis on physician burnout and career satisfaction,9-11 understanding the characteristics of community hospital work settings is critical for identifying and building sustainable employment models. Previous studies have identified that the balance of clinical and nonclinical responsibilities and the setting of community versus university-based programs are major contributors to burnout and career satisfaction.9,11 Interestingly, although community hospital-based programs have limited FTE for nondirect patient care activities, we found that a higher percentage of program site directors perceived their program models as sustainable when compared with university-based programs in prior research (63% versus 50%).6 Elucidating why community hospital PHM programs are perceived as more sustainable provides an opportunity for future research. Potential reasons may include fewer academic requirements for promotion or an increased connection to a local community.


As a newly recognized specialty, pediatric hospital medicine (PHM) continues to expand and diversify.1 Pediatric hospitalists care for children in hospitals ranging from small, rural community hospitals to large, free-standing quaternary children’s hospitals.2-4 In addition, more than 10% of graduating pediatric residents are seeking future careers within PHM.5

In 2018, Fromme et al. published a study describing clinical workload for pediatric hospitalists within university-based settings.6 They characterized the diversity of work models and programmatic sustainability but limited the study to university-based programs. With over half of children receiving care within community hospitals,7 workforce patterns for community-based pediatric hospitalists should be characterized to maximize sustainability and minimize attrition across the field.

In this study, we describe programmatic variability in clinical work expectations of 70 community-based PHM programs. We aimed to describe existing work models and expectations of community-based program leaders as they relate to their unique clinical setting.

METHODS

We conducted a cross-sectional survey of community-based PHM site directors through structured interviews. Community hospital programs were self-defined by the study participants but were typically general hospitals that admit pediatric patients, rather than free-standing children’s hospitals or children’s hospitals within a general hospital. Survey respondents were asked to answer questions reflecting only the expectations at their community hospital.

Survey Design and Content

Building from a tool used by Fromme et al.,6 we created a 12-question structured interview questionnaire focused on three areas: (1) full-time equivalent (FTE) metrics, including definitions of a 1.0 FTE, “typical” shifts, and weekend responsibilities; (2) work volume, including census parameters, service-line coverage expectations, back-up systems, and overnight call responsibilities; and (3) programmatic model, including sense of sustainability (eg, minimizing burnout and attrition), support for activities such as administrative or research time, and employer model (Appendix).

We modified the survey through research team consensus. After pilot testing by research team members at their own sites, the survey was refined for item clarity, structural design, and length. Because of the anticipated variability in work models, we administered the survey through phone interviews rather than traditional written distribution. The research team discussed how each question should be asked, and responses were clarified to maintain consistency.

Survey Administration

Given the absence of a national registry or database for community-based PHM programs, study participation was solicited through an invitation posted on the American Academy of Pediatrics Section on Hospital Medicine (AAP SOHM) Listserv and the AAP SOHM Community Hospitalist Listserv in May 2018. Invitations were posted twice, two weeks apart. Each research team member completed 6-19 interviews. Responses to survey questions were recorded in REDCap, a secure, web-based data capture instrument.8

Participation in the study was considered implied consent. Participants received no monetary incentive, although respondents were offered the deidentified survey data. The study was exempted by the University of Chicago Institutional Review Board.

Data Analysis

Employers were dichotomized as community hospital employers (primary community hospital employment or a private organization) or noncommunity hospital employers (children’s/university hospital employment or a school of medicine). Descriptive statistics were used to compare the demographics of the two employer groups. P values were calculated using two-sample t-tests for continuous variables and chi-square or Fisher exact tests for categorical variables. The Mann–Whitney U-test was used for continuous variables that were not normally distributed. Analyses were performed using the R Statistical Programming Language (R Foundation for Statistical Computing, Vienna, Austria), version 3.4.3.
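The statistical comparisons described above (run in R in the study) can be sketched with their SciPy equivalents. This is an illustrative sketch only: the group values and contingency table below are hypothetical, not the study's data.

```python
# Hedged Python/SciPy analogue of the R analysis described above.
# All data here are hypothetical illustrations, not study data.
from scipy import stats

# Continuous variable (e.g., expected 1.0 FTE hours/year) by employer group
community = [2000, 2080, 1950, 2100, 2040]
noncommunity = [1850, 1800, 1900, 1820, 1880]

# Two-sample t-test for a normally distributed continuous variable
t_stat, t_p = stats.ttest_ind(community, noncommunity)

# Mann-Whitney U-test for a continuous variable without normality
u_stat, u_p = stats.mannwhitneyu(community, noncommunity, alternative="two-sided")

# Chi-square test for a categorical variable (e.g., in-house overnight coverage),
# rows = employer group, columns = yes/no
table = [[25, 5],
         [24, 16]]
chi2, chi_p, dof, expected = stats.chi2_contingency(table)

print(round(t_p, 4), round(u_p, 4), round(chi_p, 4))
```

Fisher's exact test (`stats.fisher_exact`) would replace the chi-square test when expected cell counts are small.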

RESULTS

Participation and Program Characteristics

We interviewed 70 community-based PHM site directors representing programs across 29 states (Table 1) and five geographic regions: Midwest (34.3%), Northeast (11.4%), Southeast (15.7%), Southwest (4.3%), and West (34.3%). Employer models varied across groups, with more noncommunity hospital employers (57%) than community hospital employers (43%). The top three services covered by pediatric hospitalists were pediatric inpatient or observation bed admissions (97%), emergency department consults (89%), and general newborns (67%). PHM programs also provided coverage for other services, including newborn deliveries (43%), Special Care Nursery/Level II Neonatal Intensive Care Unit (41%), step-down unit (20%), and mental health units (13%). Fifty-nine percent of programs provided education for family medicine residents and 36% for pediatric residents; 70% worked with advanced practice providers. The majority of programs (70%) provided in-house coverage overnight.

Clinical Work Expectations and Employer Model

Clinical work expectations varied broadly across programs (Table 2). The median expected hours for a 1.0 FTE was 1,882 hours per year (interquartile range [IQR] 1,805, 2,016), and the median expected weekend coverage/year (defined as covering two days or two nights of the weekend) was 21 (IQR 14, 24). Most programs did not expand staff coverage based on seasonality (73%), and less than 20% of programs operated with a census cap. Median support for nondirect patient care activities was 4% (IQR 0, 10) of a program’s total FTE (ie, a 5.0 FTE program would have 0.20 FTE support). Programs with community hospital employers had an 8% higher expectation of 1.0 FTE hours/year (P = .01) and viewed an appropriate pediatric morning census as 20% higher (P = .01; Table 2).
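The FTE-support arithmetic and the median/IQR summaries in this paragraph can be made concrete. The 5.0 FTE example is from the text; the distribution of support percentages below is hypothetical.

```python
# Stdlib-only sketch of the summary statistics used above.
import statistics

def support_fte(total_fte, support_fraction):
    # Protected (nondirect patient care) FTE = program size x support fraction.
    # Example from the text: a 5.0-FTE program at the median 4% support.
    return total_fte * support_fraction

protected = support_fte(5.0, 0.04)  # about 0.20 FTE of protected time

# Median and IQR of a hypothetical distribution of support percentages
support_pct = [0, 0, 2, 4, 4, 6, 10, 12]
median = statistics.median(support_pct)
q1, _, q3 = statistics.quantiles(support_pct, n=4, method="inclusive")
print(protected, median, q1, q3)
```

The `method="inclusive"` quartiles match the common linear-interpolation definition; a different convention would shift the IQR slightly.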

Program Sustainability

Twenty-six (37%) site directors described their program as unsustainable. When programmatic characteristics and clinical work expectations were analyzed by perception of sustainability, we found no differences between programs perceived as sustainable and those perceived as unsustainable in 1.0 FTE hours/year (P = .16), weekends/year (P = .65), in-house call (P = .36), or the presence of a back-up system (P = .61).

DISCUSSION

To our knowledge, this study is the first to describe clinical work models exclusively for pediatric community hospitalist programs. We found that expectations for clinical FTE hours, weekend coverage, appropriate morning census, support for nondirect patient care activities, and perception of sustainability varied broadly across programs. The only variable associated with these differences was the employer model: programs employed by a community hospital had higher expected hours/year and a higher perceived appropriate morning pediatric census than those employed by noncommunity hospital employers.

With a growing emphasis on physician burnout and career satisfaction,9-11 understanding the characteristics of community hospital work settings is critical for identifying and building sustainable employment models. Previous studies have identified that the balance of clinical and nonclinical responsibilities and the setting of community versus university-based programs are major contributors to burnout and career satisfaction.9,11 Interestingly, although community hospital-based programs have limited FTE for nondirect patient care activities, we found that a higher percentage of program site directors perceived their program models as sustainable when compared with university-based programs in prior research (63% versus 50%).6 Elucidating why community hospital PHM programs are perceived as more sustainable provides an opportunity for future research. Potential reasons may include fewer academic requirements for promotion or an increased connection to a local community.

We also found that the employer model had a statistically significant impact on expected FTE hours per year but not on perception of sustainability. Programs employed by community hospitals worked 8% more hours per year than those employed by noncommunity hospital employers and accepted a higher morning pediatric census. This variation in hours and census level appropriateness is likely multifactorial, potentially from higher nonclinical expectations for promotion (eg, academic or scholarly production) at school of medicine or children’s hospital employed programs versus limited reimbursement for administrative responsibilities within community hospital employment models.

There are several potential next steps for our findings. As our data are the first attempt (to our knowledge) at describing the current practice and expectations exclusively within community hospital programs, this study can be used as a starting point for the development of workload expectation standards. Increasing transparency nationally for individual community programs potentially promotes discussions around burnout and attrition. Having objective data to compare program models may assist in advocating with local hospital leadership for restructuring that better aligns with national norms.

Our study has several limitations. First, our sampling frame was based upon a self-selection of program directors. This may have led to a biased representation of programs with higher workloads motivated to develop a standard to compare with other programs, which may have potentially led to an overestimation of hours. Second, without a registry or database for community-based pediatric hospitalist programs, we do not know the percentage of community-based programs that our sample represents. Although our results cannot speak for all community PHM programs, we attempted to mitigate nonresponse bias through the breadth of programs represented, which spanned 29 states, five geographic regions, and teaching and nonteaching programs. The interview-based method for data collection allowed the research team to clarify questions and responses across sites, thereby improving the quality and consistency of the data for the represented study sample. Finally, other factors possibly contributed to sustainability that we did not address in this study, such as programs that are dependent on billable encounters as part of their salary support.

CONCLUSION

As a newly recognized subspecialty, creating a reference for community-based program leaders to determine and discuss individual models and expectations with hospital administrators may help address programmatic sustainability. It may also allow for the analysis of long-term career satisfaction and longevity within community PHM programs based on workload. Future studies should further explore root causes for workload discrepancies between community and university employed programs along with establishing potential standards for PHM program development.

Acknowledgments

We would like to thank the Stanford School of Medicine Quantitative Sciences Unit staff for their assistance in statistical analysis.

Disclosure

The authors have nothing to disclose.

References

1. Wachter RM, Goldman L. Zero to 50,000—the 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/NEJMp1607958.
2. Gosdin C, Simmons J, Yau C, Sucharew H, Carlson D, Paciorkowski N. Survey of academic pediatric hospitalist programs in the US: organizational, administrative, and financial factors. J Hosp Med. 2013;8(6):285-291. https://doi.org/10.1002/jhm.2020.
3. Paul DH, Jennifer D, Elizabeth R, et al. Proposed dashboard for pediatric hospital medicine groups. Hosp Pediatr. 2012;2(2):59-68. https://doi.org/10.1542/hpeds.2012-0004.
4. Gary LF, Kathryn B, Kamilah N, Indu L. Characteristics of the pediatric hospitalist workforce: its roles and work environment. Pediatrics. 2007;120(1):33-39. https://doi.org/10.1542/peds.2007-0304.
5. Leyenaar JK, Frintner MP. Graduating pediatric residents entering the hospital medicine workforce, 2006-2015. Acad Pediatr. 2018;18(2):200-207. https://doi.org/10.1016/j.acap.2017.05.001.
6. Fromme HB, Chen CO, Fine BR, Gosdin C, Shaughnessy EE. Pediatric hospitalist workload and sustainability in university-based programs: results from a national interview-based survey. J Hosp Med. 2018;13(10):702-705. https://doi.org/10.12788/jhm.2977.
7. Leyenaar JK, Ralston SL, Shieh MS, Pekow PS, Mangione-Smith R, Lindenauer PK. Epidemiology of pediatric hospitalizations at general hospitals and freestanding children’s hospitals in the United States. J Hosp Med. 2016;11(11):743-749. https://doi.org/10.1002/jhm.2624.
8. Harris PA, Taylor R, Thielke R, Payne J, Gonzalez N, Conde JG. Research electronic data capture (REDCap)--a metadata-driven methodology and workflow process for providing translational research informatics support. J Biomed Inform. 2009;42(2):377-381. https://doi.org/10.1016/j.jbi.2008.08.010.
9. Laurie AP, Aisha BD, Mary CO. Association between practice setting and pediatric hospitalist career satisfaction. Hosp Pediatr. 2013;3(3):285-291. https://doi.org/10.1542/hpeds.2012-0085.
10. Hinami K, Whelan CT, Wolosin RJ, Miller JA, Wetterneck TB. Worklife and satisfaction of hospitalists: toward flourishing careers. J Gen Intern Med. 2011;27(1):28-36. https://doi.org/10.1007/s11606-011-1780-z.
11. Hinami K, Whelan CT, Miller JA, Wolosin RJ, Wetterneck TB. Job characteristics, satisfaction, and burnout across hospitalist practice models. J Hosp Med. 2012;7(5):402-410. https://doi.org/10.1002/jhm.1907.


Issue
Journal of Hospital Medicine 14(11)
Page Number
682-685. Published online first August 21, 2019
Article Source
© 2019 Society of Hospital Medicine
Correspondence Location
Francisco Alvarez, MD; E-mail: [email protected]; Telephone: 650-736-4421

Improving Resident Feedback on Diagnostic Reasoning after Handovers: The LOOP Project


One of the most promising methods for improving medical decision-making is learning from the outcomes of one’s decisions and either maintaining or modifying future decision-making based on those outcomes.1-3 This process of iterative improvement over time based on feedback is called calibration and is one of the most important drivers of lifelong learning and improvement.1

Despite the importance of knowing the outcomes of one’s decisions, this seldom occurs in modern medical education.4 Learners do not often obtain specific feedback about the decisions they make within a short enough time frame to intentionally reflect upon and modify that decision-making process.3,5 In addition, almost every patient admitted to a teaching hospital will be cared for by multiple physicians over the course of a hospitalization. These care transitions may be seen as barriers to high-quality care and education, but we suggest a different paradigm: transitions of care present opportunities for trainees to be teammates in each other’s calibration. Peers can provide specific feedback about the diagnostic process and inform one another about patient outcomes. Transitions of care allow for built-in “second opinions,” and trainees can intentionally learn by comparing the clinical reasoning involved at different points in a patient’s course. The diagnostic process is dynamic and complex; it is fundamental that trainees have the opportunity to reflect on the process to identify how and why the diagnostic process evolved throughout a patient’s hospitalization. Most inpatient diagnoses are “working diagnoses” that are likely to change. Thus, identifying the twists and turns in a patient’s diagnostic journey provides invaluable learning for future practice.

Herein, we describe the implementation and impact of a multisite initiative to engage residents in delivering feedback to their peers about medical decisions around transitions of care.

METHODS

The LOOP Project is a prospective clinical educational study that aimed to engage resident physicians to deliver feedback and updates about their colleagues’ diagnostic decision-making around care transitions. This study was deemed exempt from review by the University of Minnesota Institutional Review Board and either approved or deemed exempt by the corresponding Institutional Review Boards at all participating institutions. The study was conducted by seven programs at six institutions and included Internal Medicine, Pediatrics, and Internal Medicine–Pediatrics (PGY 1-4) residents from February 2017 to June 2017. Residents rotating through participating clinical services during the study period were invited to participate and given further information by site leads via informational presentations, written handouts, and/or emails.

The intervention entailed residents delivering structured feedback to their colleagues regarding their patients’ diagnoses after transitions of care. The predominant setting was the inpatient hospital medicine day-shift team providing feedback to the night-shift team regarding overnight admissions. Feedback about patients (usually chosen by the day-shift team) was delivered through completion of a standard templated form (Figure), usually sent within 24 hours after hospital admission through secure messaging (ie, an Epic In-Basket message using a SmartPhrase version of the LOOP feedback form). A 24-hour time period was chosen to allow rapid cycling of feedback focused on the initial diagnostic assessment. Site leads and resident champions promoted the project through presentations, informal discussions, and prizes for high completion rates of forms and surveys (eg, coffee cards and pizza).



Feedback forms were collected by site leads. A categorization rubric was developed during a pilot phase. Diagnoses before and after the transition of care were categorized as no change, diagnostic refinement (ie, the initial diagnosis was modified to be more specific), disease evolution (ie, the patient’s physiology or disease course changed), or major diagnostic change (ie, the initial and subsequent diagnoses differed substantially). Each form was coded by a single site lead, and conference calls were held to discuss coding and build consensus regarding the taxonomy. Diagnoses were not labeled as “right” or “wrong”; instead, categorization focused on differences between diagnoses before and after transitions of care.

Residents were invited to complete surveys before and after the rotation during which they had the opportunity to give or receive feedback. A unique identifier was entered by each participant to allow pairing of pre- and postsurveys. The survey (Appendix 1) was developed and refined during the initial pilot phase at the University of Minnesota. Surveys were collected using RedCap and analyzed using SAS version 9.3 (SAS Institute Inc., Cary, North Carolina). Differences between pre- and postsurveys were calculated using paired t-tests for continuous variables, and descriptive statistics were used for demographic and other items. Only surveys completed by individuals who completed both pre- and postsurveys were included in the analysis.
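The pre/post comparison described above can be sketched with a paired t-test (the study used SAS; SciPy's equivalent is shown here). The Likert-style self-efficacy ratings below are hypothetical, not study data.

```python
# Hedged sketch of the paired pre/post survey comparison described above.
# Ratings are hypothetical 1-5 self-efficacy scores for the same residents.
from scipy import stats

pre  = [2, 3, 2, 3, 3, 2, 4, 3]
post = [4, 4, 3, 4, 5, 3, 4, 4]

# Paired t-test: each resident serves as their own control
t_stat, p_value = stats.ttest_rel(pre, post)

mean_change = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(round(mean_change, 2), round(p_value, 4))
```

Pairing by a unique identifier, as the study did, is what makes `ttest_rel` (rather than an independent-samples test) the appropriate choice.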

RESULTS

There were 716 residents in the training programs that participated in this study; one site planned to participate but did not complete any forms. A total of 405 residents were eligible to participate during the study period. Of these, 221 (54.5%) completed presurveys and 90 (22.2%) completed postsurveys; 54 residents (13.3%) completed both pre- and postsurveys and were included in the analysis. Of the 54 respondents, 26 (48.1%) were female.

Survey results (Table) indicated significantly improved self-efficacy in identifying cognitive errors in residents’ own practice, identifying why those errors occurred, and identifying strategies to decrease future diagnostic errors. Participants noted increased frequency of discussions within teams regarding differential diagnoses, diagnostic errors, and why diagnoses changed over time. The feedback process was viewed positively by participants, who were also generally satisfied with the overall quality, frequency, and value of the feedback received. After the intervention, participants reported an increase in the amount of feedback received for night admissions and an overall increase in the perception that nighttime admissions were as “educational” as daytime admissions.



Of 544 collected forms, 238 (43.7%) showed some diagnostic change. These changes were further categorized into disease evolution (60 forms, 11.0%), diagnostic refinement (109 forms, 20.0%), and major diagnostic change (69 forms, 12.7%).
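The tallies in this paragraph can be cross-checked directly; the counts are from the text, and the percentages are recomputed against the 544 collected forms.

```python
# Sanity check of the diagnostic-change counts reported above.
total_forms = 544
changes = {
    "disease evolution": 60,
    "diagnostic refinement": 109,
    "major diagnostic change": 69,
}

# Forms showing any diagnostic change, and each category as a percent of all forms
any_change = sum(changes.values())
pct = {k: round(100 * v / total_forms, 1) for k, v in changes.items()}

print(any_change, pct)
```

The three categories sum to 238 forms (43.75% of 544, reported as 43.7% in the text), and each category percentage matches the reported values.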

CONCLUSION

This study suggests that an intervention to operationalize standardized, structured feedback about diagnostic decision-making around transitions of care is a promising approach to improve residents’ understanding of changes in, and evolution of, the diagnostic process, as well as improve the perceived educational value of overnight admissions. In our results, over 40% of the patients admitted by residents had some change in their diagnoses after a transition of care during their early hospitalization. This finding highlights the importance of ensuring that trainees have the opportunity to know the outcomes of their decisions. Indeed, residents should be encouraged to follow up on their own patients without prompting; however, studies show that this practice is uncommon and that interventions beyond admonition are necessary.4

The diagnostic change rate observed in this study confirms that diagnosis is an iterative process and that the concept of a working diagnosis is key—a diagnosis made at admission will very likely be modified by time, the natural history of the disease, and new clinical information. When diagnoses are viewed as working diagnoses, trainees may be empowered to better understand the diagnostic process. As learners and teachers adopt this perspective, training programs are more likely to be successful in helping learners calibrate toward expertise.

Previous studies have questioned whether resident physicians view overnight admissions as valuable.6 After our intervention, we found an increase in both the amount of feedback received and the proportion of participants who agreed that night and day admissions were equally educational, suggesting that targeted diagnostic reasoning feedback can bolster educational value of nighttime admissions.

This study presents a number of limitations. First, the survey response rate was low, which could potentially lead to biased results. We excluded those respondents who did not respond to both the pre- and postsurveys from the analysis. Second, we did not measure actual change in diagnostic performance. While learners did report learning and saw feedback as valuable, self-identified learning points may not always translate to improved patient care. Additionally, residents chose the patients for whom feedback was provided, and the diagnostic change rate described may be overestimated. We did not track the total number of admissions for which feedback could have been delivered during the study. We did not include a control group, and the intervention may not be responsible for changing learners’ perceptions. However, the included programs were not implementing other new protocols focused on diagnostic reasoning during the study period. In addition, we addressed diagnostic changes early in a hospital course; a comprehensive program should address more feedback loops (eg, discharging team to admitting team).

This work is a pilot study; for future interventions focused on improving calibration to be sustainable, they should be congruent with existing clinical workflows and avoid adding to the stress and/or cognitive load of an already-busy clinical experience. The most optimal strategies for delivering feedback about clinical reasoning remain unclear.

In summary, a program to deliver structured feedback among resident physicians about diagnostic reasoning across care transitions for selected hospitalized patients is viewed positively by trainees, is feasible, and leads to changes in resident perception and self-efficacy. Future studies and interventions should aim to provide feedback more systematically, rather than just for selected patients, and objectively track diagnostic changes over time in hospitalized patients. While truly objective diagnostic information is challenging to obtain, comparing admission and other inpatient diagnoses to discharge diagnoses or diagnoses from primary care follow-up visits may be helpful. In addition, studies should aim to track trainees’ clinical decision-making over time and determine the effectiveness of feedback at improving diagnostic performance through calibration.

Acknowledgments

The authors thank the trainees who participated in this study, as well as the residency leadership at participating institutions. The authors also thank Qi Wang, PhD, for providing statistical analysis.

Disclosures

The authors have nothing to disclose.

Funding

The study was funded by an AAIM Innovation Grant and local support at each participating institution.

References

1. Croskerry P. The feedback sanction. Acad Emerg Med. 2000;7(11):1232-1238. https://doi.org/10.1111/j.1553-2712.2000.tb00468.x.
2. Trowbridge RL, Dhaliwal G, Cosby KS. Educational agenda for diagnostic error reduction. BMJ Qual Saf. 2013;22(Suppl 2):ii28-ii32. https://doi.org/10.1136/bmjqs-2012-001622.
3. Dhaliwal G. Clinical excellence: make it a habit. Acad Med. 2012;87(11):1473. https://doi.org/10.1097/ACM.0b013e31826d68d9.
4. Shenvi EC, Feupe SF, Yang H, El-Kareh R. Closing the loop: a mixed-methods study about resident learning from outcome feedback after patient handoffs. Diagnosis. 2018;5(4):235-242. https://doi.org/10.1515/dx-2018-0013.
5. Rencic J. Twelve tips for teaching expertise in clinical reasoning. Med Teach. 2011;33(11):887-892. https://doi.org/10.3109/0142159X.2011.558142.
6. Bump GM, Zimmer SM, McNeil MA, Elnicki DM. Hold-over admissions: are they educational for residents? J Gen Intern Med. 2014;29(3):463-467. https://doi.org/10.1007/s11606-013-2667-y.

Issue
Journal of Hospital Medicine 14(10)
Page Number
622-625. Published online first August 21, 2019

One of the most promising methods for improving medical decision-making is learning from the outcomes of one’s decisions and either maintaining or modifying future decision-making based on those outcomes.1-3 This process of iterative improvement over time based on feedback is called calibration and is one of the most important drivers of lifelong learning and improvement.1

Despite the importance of knowing the outcomes of one’s decisions, this seldom occurs in modern medical education.4 Learners do not often obtain specific feedback about the decisions they make within a short enough time frame to intentionally reflect upon and modify that decision-making process.3,5 In addition, almost every patient admitted to a teaching hospital will be cared for by multiple physicians over the course of a hospitalization. These care transitions may be seen as barriers to high-quality care and education, but we suggest a different paradigm: transitions of care present opportunities for trainees to be teammates in each other’s calibration. Peers can provide specific feedback about the diagnostic process and inform one another about patient outcomes. Transitions of care allow for built-in “second opinions,” and trainees can intentionally learn by comparing the clinical reasoning involved at different points in a patient’s course. The diagnostic process is dynamic and complex; it is fundamental that trainees have the opportunity to reflect on the process to identify how and why the diagnostic process evolved throughout a patient’s hospitalization. Most inpatient diagnoses are “working diagnoses” that are likely to change. Thus, identifying the twists and turns in a patient’s diagnostic journey provides invaluable learning for future practice.

Herein, we describe the implementation and impact of a multisite initiative to engage residents in delivering feedback to their peers about medical decisions around transitions of care.

METHODS

The LOOP Project is a prospective clinical educational study that aimed to engage resident physicians to deliver feedback and updates about their colleagues’ diagnostic decision-making around care transitions. This study was deemed exempt from review by the University of Minnesota Institutional Review Board and either approved or deemed exempt by the corresponding Institutional Review Boards at all participating institutions. The study was conducted by seven programs at six institutions and included Internal Medicine, Pediatrics, and Internal Medicine–Pediatrics (PGY 1-4) residents from February 2017 to June 2017. Residents rotating through participating clinical services during the study period were invited to participate and given further information by site leads via informational presentations, written handouts, and/or emails.


The intervention entailed residents delivering structured feedback to their colleagues regarding their patients’ diagnoses after transitions of care. The predominant setting was the inpatient hospital medicine day-shift team providing feedback to the night-shift team regarding overnight admissions. Feedback about patients (usually chosen by the day-shift team) was delivered through completion of a standard templated form (Figure), usually sent within 24 hours after hospital admission through secure messaging (eg, an Epic In-Basket message utilizing a SmartPhrase of the LOOP feedback form). A 24-hour window was chosen to allow rapid cycling of feedback focused on the initial diagnostic assessment. Site leads and resident champions promoted the project through presentations, informal discussions, and prizes for high completion rates of forms and surveys (eg, coffee cards and pizza).



Feedback forms were collected by site leads. A categorization rubric was developed during a pilot phase. Diagnoses before and after the transition of care were categorized as no change, diagnostic refinement (ie, the initial diagnosis was modified to be more specific), disease evolution (ie, the patient’s physiology or disease course changed), or major diagnostic change (ie, the initial and subsequent diagnoses differed substantially). Site leads acted as single-coders and conference calls were held to discuss coding and build consensus regarding the taxonomy. Diagnoses were not labeled as “right” or “wrong”; instead, categorization focused on differences between diagnoses before and after transitions of care.
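
The four-category rubric lends itself to a simple coding tally. The sketch below is illustrative only: the category names come from the rubric described above, but the feedback-form records and field names are hypothetical.

```python
from collections import Counter

# Diagnostic-change categories from the LOOP rubric (names from the text).
CATEGORIES = {
    "no_change",    # admission and subsequent diagnoses agree
    "refinement",   # initial diagnosis made more specific
    "evolution",    # patient's physiology or disease course changed
    "major_change", # initial and subsequent diagnoses differ substantially
}

def tally(forms):
    """Count coded feedback forms per category, rejecting unknown codes."""
    counts = Counter(f["category"] for f in forms)
    unknown = set(counts) - CATEGORIES
    if unknown:
        raise ValueError(f"uncoded categories: {unknown}")
    return counts

# Hypothetical coded forms, one per patient handoff.
forms = [
    {"patient": "A", "category": "no_change"},
    {"patient": "B", "category": "refinement"},
    {"patient": "C", "category": "major_change"},
]
print(tally(forms))
```

Validating codes against a fixed category set mirrors the consensus-building step: any form coded outside the agreed taxonomy is surfaced rather than silently counted.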

Residents were invited to complete surveys before and after the rotation during which they had the opportunity to give or receive feedback. A unique identifier entered by each participant allowed pairing of pre- and postsurveys. The survey (Appendix 1) was developed and refined during the initial pilot phase at the University of Minnesota. Surveys were collected using REDCap and analyzed using SAS version 9.3 (SAS Institute Inc., Cary, North Carolina). Differences between pre- and postsurveys were assessed using paired t-tests for continuous variables, and descriptive statistics were used for demographic and other items. Only individuals who completed both the pre- and postsurvey were included in the analysis.
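
The paired pre/post comparison can be sketched in a few lines (shown here in Python rather than SAS). The self-efficacy ratings below are hypothetical; in the study, each pair was matched by the participant's unique identifier.

```python
from scipy import stats

# Hypothetical 5-point self-efficacy ratings from residents who completed
# both surveys, ordered so pre[i] and post[i] are the same resident.
pre  = [2, 3, 3, 2, 4, 3, 2, 3]
post = [3, 4, 3, 3, 4, 4, 3, 4]

# Paired t-test: each resident serves as their own control, so the test
# operates on the per-resident post-minus-pre differences.
t_stat, p_value = stats.ttest_rel(post, pre)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

Pairing is what makes the low completion rate bite: only residents with both surveys contribute a difference, which is why the analysis was restricted to the 54 dual completers.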

RESULTS

Overall, there were 716 residents in the training programs that participated in this study; one site planned to participate but did not complete any forms. A total of 405 residents were eligible to participate during the study period. Of these, 221 (54.5%) completed presurveys and 90 (22.2%) completed postsurveys; 54 residents (13.3%) completed both and were included in the analysis. Of the 54 respondents, 26 (48.1%) were female.

Survey results (Table) indicated significantly improved self-efficacy in identifying cognitive errors in residents’ own practice, identifying why those errors occurred, and identifying strategies to decrease future diagnostic errors. Participants noted increased frequency of discussions within teams regarding differential diagnoses, diagnostic errors, and why diagnoses changed over time. The feedback process was viewed positively by participants, who were also generally satisfied with the overall quality, frequency, and value of the feedback received. After the intervention, participants reported an increase in the amount of feedback received for night admissions and an overall increase in the perception that nighttime admissions were as “educational” as daytime admissions.



Of 544 collected forms, 238 (43.7%) showed some diagnostic change. These changes were further categorized into disease evolution (60 forms, 11.0%), diagnostic refinement (109 forms, 20.0%), and major diagnostic change (69 forms, 12.7%).


CONCLUSION

This study suggests that an intervention to operationalize standardized, structured feedback about diagnostic decision-making around transitions of care is a promising approach to improve residents’ understanding of changes in, and evolution of, the diagnostic process, as well as to improve the perceived educational value of overnight admissions. In our results, over 40% of the patients admitted by residents had some change in their diagnoses after a transition of care early in their hospitalization. This finding highlights the importance of ensuring that trainees have the opportunity to know the outcomes of their decisions. Indeed, residents should be encouraged to follow up on their own patients without prompting; however, studies show that this practice is uncommon and that interventions beyond admonition are necessary.4

The diagnostic change rate observed in this study confirms that diagnosis is an iterative process and that the concept of a working diagnosis is key—a diagnosis made at admission will very likely be modified by time, the natural history of the disease, and new clinical information. When diagnoses are viewed as working diagnoses, trainees may be empowered to better understand the diagnostic process. As learners and teachers adopt this perspective, training programs are more likely to be successful in helping learners calibrate toward expertise.

Previous studies have questioned whether resident physicians view overnight admissions as valuable.6 After our intervention, we found an increase in both the amount of feedback received and the proportion of participants who agreed that night and day admissions were equally educational, suggesting that targeted diagnostic reasoning feedback can bolster educational value of nighttime admissions.

This study presents a number of limitations. First, the survey response rate was low, which could potentially lead to biased results. We excluded those respondents who did not respond to both the pre- and postsurveys from the analysis. Second, we did not measure actual change in diagnostic performance. While learners did report learning and saw feedback as valuable, self-identified learning points may not always translate to improved patient care. Additionally, residents chose the patients for whom feedback was provided, and the diagnostic change rate described may be overestimated. We did not track the total number of admissions for which feedback could have been delivered during the study. We did not include a control group, and the intervention may not be responsible for changing learners’ perceptions. However, the included programs were not implementing other new protocols focused on diagnostic reasoning during the study period. In addition, we addressed diagnostic changes early in a hospital course; a comprehensive program should address more feedback loops (eg, discharging team to admitting team).

This work is a pilot study; for future interventions focused on improving calibration to be sustainable, they should be congruent with existing clinical workflows and avoid adding to the stress and cognitive load of an already busy clinical experience. The optimal strategies for delivering feedback about clinical reasoning remain unclear.

In summary, a program to deliver structured feedback among resident physicians about diagnostic reasoning across care transitions for selected hospitalized patients is viewed positively by trainees, is feasible, and leads to changes in resident perception and self-efficacy. Future studies and interventions should aim to provide feedback more systematically, rather than just for selected patients, and objectively track diagnostic changes over time in hospitalized patients. While truly objective diagnostic information is challenging to obtain, comparing admission and other inpatient diagnoses to discharge diagnoses or diagnoses from primary care follow-up visits may be helpful. In addition, studies should aim to track trainees’ clinical decision-making over time and determine the effectiveness of feedback at improving diagnostic performance through calibration.


Acknowledgments

The authors thank the trainees who participated in this study, as well as the residency leadership at participating institutions. The authors also thank Qi Wang, PhD, for providing statistical analysis.

Disclosures

The authors have nothing to disclose.

Funding

The study was funded by an AAIM Innovation Grant and local support at each participating institution.

References

1. Croskerry P. The feedback sanction. Acad Emerg Med. 2000;7(11):1232-1238. https://doi.org/10.1111/j.1553-2712.2000.tb00468.x.
2. Trowbridge RL, Dhaliwal G, Cosby KS. Educational agenda for diagnostic error reduction. BMJ Qual Saf. 2013;22(Suppl 2):ii28-ii32. https://doi.org/10.1136/bmjqs-2012-001622.
3. Dhaliwal G. Clinical excellence: make it a habit. Acad Med. 2012;87(11):1473. https://doi.org/10.1097/ACM.0b013e31826d68d9.
4. Shenvi EC, Feupe SF, Yang H, El-Kareh R. Closing the loop: a mixed-methods study about resident learning from outcome feedback after patient handoffs. Diagnosis. 2018;5(4):235-242. https://doi.org/10.1515/dx-2018-0013.
5. Rencic J. Twelve tips for teaching expertise in clinical reasoning. Med Teach. 2011;33(11):887-892. https://doi.org/10.3109/0142159X.2011.558142.
6. Bump GM, Zimmer SM, McNeil MA, Elnicki DM. Hold-over admissions: are they educational for residents? J Gen Intern Med. 2014;29(3):463-467. https://doi.org/10.1007/s11606-013-2667-y.

Journal of Hospital Medicine 14(10)

© 2019 Society of Hospital Medicine

Correspondence: Kathleen P. Lane, MD; E-mail: [email protected]; Telephone: 612-624-8984

An On-Treatment Analysis of the MARQUIS Study: Interventions to Improve Inpatient Medication Reconciliation


Unintentional medication discrepancies in the hospital setting are common and contribute to adverse drug events, resulting in patient harm.1 Discrepancies can be resolved by implementing high-quality medication reconciliation, but there are insufficient data to guide hospitals as to which interventions are most effective at improving medication reconciliation processes and reducing harm.2 We recently reported that implementation of a best practices toolkit reduced total medication discrepancies in the Multi-Center Medication Reconciliation Quality Improvement Study (MARQUIS).3 This report describes the effect of individual toolkit components on rates of medication discrepancies with the potential for patient harm.

METHODS

Detailed descriptions of the MARQUIS intervention toolkit and study design have been published previously.4,5 Briefly, MARQUIS was a pragmatic, mentored, quality improvement (QI) study in which five hospitals in the United States implemented interventions from a best practices toolkit to improve medication reconciliation on noncritical care medical and surgical units from September 2011 to July 2014. We used a mentored implementation approach, in which each site identified the leaders of their local quality improvement team (ie, mentees) who received mentorship from a trained physician with QI and medication safety experience.6 Mentors conducted monthly calls with their mentees and two site visits. Sites adapted and implemented one or more components from the MARQUIS toolkit, a compilation of evidence-based best practices in medication reconciliation.5,7

The primary outcome was unintentional medication discrepancies in admission and discharge orders with the potential for causing harm, as previously described.4 Trained study pharmacists at each site took “gold standard” medication histories on a random sample of up to 22 patients per month. These medications were then compared with admission and discharge medication orders, and all unintentional discrepancies were identified. The discrepancies were then adjudicated by physicians blinded to the treatment arm, who confirmed whether discrepancies were unintentional and carried the potential for patient harm.

We employed a modification of a stepped wedge methodology to measure the incremental effect of implementing nine different intervention components, introduced at different sites over the course of the study, on the number of potentially harmful discrepancies per patient. These analyses were restricted to the postimplementation period on hospital units that implemented at least one intervention. All interventions conducted at each site were categorized by component, including dates of implementation. Each intervention component could be applied more than once per site (eg, when involving a new group of providers) or implemented on a new hospital unit or service, in which case, all dates were included in the analysis. We conducted a multivariable Poisson regression (with time divided into months) adjusted for patient factors, season, and site, with the number of potentially harmful discrepancies as the dependent variable, and the total number of gold standard medications as a model offset. The model was designed to analyze changes in the y-intercept each time an intervention component was either implemented or spread and assumed the change in the y-intercept was the same for each of these events for any given component. The model also assumes that combinations of interventions had independent additive effects.

 

 

RESULTS

Across the five participating sites, 1,648 patients were enrolled from September 2011 to July 2014. This number included 613 patients during the preimplementation period and 1,035 patients during the postimplementation period, of which 791 were on intervention units and comprised the study population. Table 1 displays the intervention components implemented by site. Sites implemented between one and seven components. The most frequently implemented intervention component was training existing staff to take the best possible medication histories (BPMHs), implemented at four sites. The regression results are displayed in Table 2. Three interventions were associated with significant decreases in potentially harmful discrepancy rates: (1) clearly defining roles and responsibilities and communicating this with clinical staff (hazard ratio [HR] 0.53, 95% CI: 0.32–0.87); (2) training existing staff to perform discharge medication reconciliation and patient counseling (HR 0.64, 95% CI: 0.46–0.89); and (3) hiring additional staff to perform discharge medication reconciliation and patient counseling (HR 0.48, 95% CI: 0.31–0.77). Two interventions were associated with significant increases in potentially harmful discrepancy rates: training existing staff to take BPMHs (HR 1.38, 95% CI: 1.21–1.57) and implementing a new electronic health record (EHR; HR 2.21, 95% CI: 1.64–2.97).

DISCUSSION

We noted that three intervention components were associated with decreased rates of unintentional medication discrepancies with potential for harm, whereas two were associated with increased rates. The components with a beneficial effect were not surprising. A prior qualitative study demonstrated the confusion related to clinicians’ roles and responsibilities during medication reconciliation; therefore, clear delineations should reduce rework and improve the medication reconciliation process.8 Other studies have shown the benefits of pharmacist involvement in the inpatient setting, particularly in reducing errors at discharge.9 However, we did not anticipate that training staff to take BPMHs would be detrimental. Possible reasons for this finding that are based on direct observations by mentors at site visits or noted during monthly calls include (1) training personnel on this task without certification of competency may not sufficiently improve their skills, leading instead to diffusion of responsibility; (2) training personnel without sufficient time to perform the task well (eg, frontline nurses with many other responsibilities) may be counterproductive compared with training a few personnel with time dedicated to this task; and (3) training existing personnel in history-taking may have been used to delay the necessary hiring of more staff to take BPMHs. Future studies could address several of these shortcomings in both the design and implementation of medication history-training intervention components.

Journal of Hospital Medicine. 2019;14(10):614-617. Published online first August 21, 2019.

Unintentional medication discrepancies in the hospital setting are common and contribute to adverse drug events, resulting in patient harm.1 Discrepancies can be resolved by implementing high-quality medication reconciliation, but there are insufficient data to guide hospitals as to which interventions are most effective at improving medication reconciliation processes and reducing harm.2 We recently reported that implementation of a best practices toolkit reduced total medication discrepancies in the Multi-Center Medication Reconciliation Quality Improvement Study (MARQUIS).3 This report describes the effect of individual toolkit components on rates of medication discrepancies with the potential for patient harm.

METHODS

Detailed descriptions of the intervention toolkit and study design of MARQUIS are published.4,5 Briefly, MARQUIS was a pragmatic, mentored, quality improvement (QI) study in which five hospitals in the United States implemented interventions from a best practices toolkit to improve medication reconciliation on noncritical care medical and surgical units from September 2011 to July 2014. We used a mentored implementation approach, in which each site identified the leaders of their local quality improvement team (ie, mentees) who received mentorship from a trained physician with QI and medication safety experience.6 Mentors conducted monthly calls with their mentees and two site visits. Sites adapted and implemented one or more components from the MARQUIS toolkit, a compilation of evidence-based best practices in medication reconciliation.5,7

The primary outcome was unintentional medication discrepancies in admission and discharge orders with the potential for causing harm, as previously described.4 Trained study pharmacists at each site took “gold standard” medication histories on a random sample of up to 22 patients per month. These medications were then compared with admission and discharge medication orders, and all unintentional discrepancies were identified. The discrepancies were then adjudicated by physicians blinded to the treatment arm, who confirmed whether discrepancies were unintentional and carried the potential for patient harm.

We employed a modification of a stepped-wedge methodology to measure the incremental effect of implementing nine different intervention components, introduced at different sites over the course of the study, on the number of potentially harmful discrepancies per patient. These analyses were restricted to the postimplementation period on hospital units that implemented at least one intervention. All interventions conducted at each site were categorized by component, including dates of implementation. Each intervention component could be applied more than once per site (eg, when involving a new group of providers) or implemented on a new hospital unit or service, in which case all dates were included in the analysis. We conducted a multivariable Poisson regression (with time divided into months) adjusted for patient factors, season, and site, with the number of potentially harmful discrepancies as the dependent variable and the total number of gold standard medications as a model offset. The model was designed to analyze changes in the y-intercept each time an intervention component was either implemented or spread, and it assumed that the change in the y-intercept was the same for each of these events for any given component. It also assumed that combinations of interventions had independent additive effects.
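The core of this model, a Poisson regression on discrepancy counts with the number of gold standard medications as an offset, can be sketched in code. The following is a minimal illustration only, not the study's analysis code: it simulates hypothetical per-patient discrepancy counts under two binary intervention-component indicators, uses the log of each patient's medication count as the offset, and fits the model by Newton-Raphson (iteratively reweighted least squares). All variable names, simulated values, and the two-component design are invented for illustration.

```python
import numpy as np

def fit_poisson(X, y, offset, n_iter=50):
    """Fit a Poisson GLM with log link and a log offset via Newton-Raphson (IRLS)."""
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        eta = X @ beta + offset             # linear predictor including the offset
        mu = np.exp(eta)                    # expected discrepancy count per patient
        w = mu                              # Poisson working weights
        z = (eta - offset) + (y - mu) / mu  # working response, offset removed
        XtW = X.T * w                       # X' diag(w)
        beta = np.linalg.solve(XtW @ X, XtW @ z)  # weighted least-squares update
    return beta

rng = np.random.default_rng(0)
n = 2000
# Hypothetical design matrix: intercept plus two intervention-component indicators
X = np.column_stack([np.ones(n),
                     rng.integers(0, 2, n),
                     rng.integers(0, 2, n)])
n_meds = rng.integers(5, 15, n)            # gold standard medication count per patient
offset = np.log(n_meds)                    # offset: discrepancies scale with medications
true_beta = np.array([-2.0, -0.6, 0.3])    # log-scale effects (invented for the simulation)
y = rng.poisson(np.exp(X @ true_beta + offset))

beta_hat = fit_poisson(X, y.astype(float), offset)
rate_ratios = np.exp(beta_hat[1:])         # one beneficial component (<1), one harmful (>1)
```

Exponentiated coefficients are the ratio estimates reported in the Results; the actual analysis additionally adjusted for patient factors, season, and site.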

RESULTS

Across the five participating sites, 1,648 patients were enrolled from September 2011 to July 2014. This number included 613 patients during the preimplementation period and 1,035 patients during the postimplementation period, of which 791 were on intervention units and comprised the study population. Table 1 displays the intervention components implemented by site. Sites implemented between one and seven components. The most frequently implemented intervention component was training existing staff to take the best possible medication histories (BPMHs), implemented at four sites. The regression results are displayed in Table 2. Three interventions were associated with significant decreases in potentially harmful discrepancy rates: (1) clearly defining roles and responsibilities and communicating this with clinical staff (hazard ratio [HR] 0.53, 95% CI: 0.32–0.87); (2) training existing staff to perform discharge medication reconciliation and patient counseling (HR 0.64, 95% CI: 0.46–0.89); and (3) hiring additional staff to perform discharge medication reconciliation and patient counseling (HR 0.48, 95% CI: 0.31–0.77). Two interventions were associated with significant increases in potentially harmful discrepancy rates: training existing staff to take BPMHs (HR 1.38, 95% CI: 1.21–1.57) and implementing a new electronic health record (EHR; HR 2.21, 95% CI: 1.64–2.97).
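Each ratio estimate and its 95% confidence interval come from exponentiating a log-scale coefficient and its Wald limits. A small sketch, using a hypothetical coefficient and standard error chosen so the output reproduces the reported roles-and-responsibilities estimate of 0.53 (95% CI: 0.32-0.87):

```python
import math

def ratio_ci(beta, se, z=1.96):
    """Exponentiate a log-scale coefficient and its Wald limits into a ratio with 95% CI."""
    return math.exp(beta), math.exp(beta - z * se), math.exp(beta + z * se)

# Hypothetical log-scale coefficient and standard error, picked so the result
# matches the reported 0.53 (0.32-0.87) interval; not values from the study model
rr, lo, hi = ratio_ci(math.log(0.53), 0.255)
```

On the log scale the interval is symmetric (beta ± 1.96 × SE); exponentiation makes it asymmetric around the point estimate, which is why reported intervals such as 0.32-0.87 are not centered on 0.53.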

DISCUSSION

We noted that three intervention components were associated with decreased rates of unintentional medication discrepancies with the potential for harm, whereas two were associated with increased rates. The components with a beneficial effect were not surprising. A prior qualitative study demonstrated confusion over clinicians’ roles and responsibilities during medication reconciliation; clear delineation should therefore reduce rework and improve the medication reconciliation process.8 Other studies have shown the benefits of pharmacist involvement in the inpatient setting, particularly in reducing errors at discharge.9 However, we did not anticipate that training staff to take BPMHs would be detrimental. Possible reasons for this finding, based on mentors’ direct observations at site visits or notes from monthly calls, include the following: (1) training personnel on this task without certifying competency may not sufficiently improve their skills, leading instead to diffusion of responsibility; (2) training personnel who lack sufficient time to perform the task well (eg, frontline nurses with many other responsibilities) may be counterproductive compared with training a few personnel with time dedicated to this task; and (3) training existing personnel in history-taking may have been used to delay the necessary hiring of more staff to take BPMHs. Future studies could address several of these shortcomings in both the design and implementation of medication history training intervention components.

Several reasons may explain the association we found between implementing a new EHR and increased rates of discrepancies. Based on mentors’ experiences, we suspect that sitewide EHR implementation demands substantial resources, time, and effort, pulling attention away from medication safety. Most large vendor EHRs have design flaws in their medication reconciliation modules, the overarching problem being that these systems are not designed for an interdisciplinary team approach to medication reconciliation (unpublished material). Problems may also exist with the local implementation of these modules and the way clinicians use them (eg, bypassing critical steps in the medication reconciliation process, leading to new medication errors). We have updated the MARQUIS toolkit to include the pros and cons of EHR software and the ideal features and functions of medication reconciliation information technology. Notably, this finding contrasts with previous studies that showed beneficial effects of dedicated medication reconciliation applications, which used proprietary technology, often combined with process redesign, in a focused QI effort.10-13 Together, these findings suggest the need for improvements in the design, local customization, and use of medication reconciliation modules in vendor EHRs.

Our study has several limitations. We conducted an on-treatment analysis, which may be confounded by characteristics of sites that chose to implement different intervention components; however, we adjusted for sites in the analysis. Some results are based on a limited number of sites implementing an intervention component (eg, defining roles and responsibilities). Although this was a longitudinal study, and we adjusted for seasonal effects, it is possible that temporal trends and cointerventions confounded our results. The adjudication of discrepancies for the potential for harm was somewhat subjective, although we used a rigorous process to ensure the reliability of adjudication, as in prior studies.3,14 As in the main analysis of the MARQUIS study, this analysis did not measure intervention fidelity.

Based on these analyses and the literature base, we recommend that hospitals focus first on hiring and training dedicated staff (usually pharmacists) to assist with medication reconciliation at discharge.7 Hospitals should also be aware of potential increases in medication discrepancies when implementing a large vendor EHR across their institution. Further work is needed on the best ways to mitigate these adverse effects, at both the design and local site levels. Finally, the effect of medication history training on discrepancies warrants further study.

Disclosures

SK has served as a consultant to Verustat, a remote health monitoring company. All other authors have no disclosures or conflicts of interest.

Funding

This study was supported by the Agency for Healthcare Research and Quality (grant number: R18 HS019598). JLS has received funding from (1) Mallinckrodt Pharmaceuticals for an investigator-initiated study of opioid-related adverse drug events in postsurgical patients; (2) Horizon Blue Cross Blue Shield for an honorarium and travel expenses for a workshop on medication reconciliation; (3) Island Peer Review Organization for an honorarium and travel expenses for a workshop on medication reconciliation; and (4) Portola Pharmaceuticals for an investigator-initiated study of inpatients who decline subcutaneous medications for venous thromboembolism prophylaxis. ASM was funded by a VA HSR&D Career Development Award (12-168).

Trial Registration

ClinicalTrials.gov NCT01337063

References

1. Cornish PL, Knowles SR, Marchesano R, et al. Unintended medication discrepancies at the time of hospital admission. Arch Intern Med. 2005;165(4):424-429. https://doi.org/10.1001/archinte.165.4.424.
2. Kaboli PJ, Fernandes O. Medication reconciliation: moving forward. Arch Intern Med. 2012;172(14):1069-1070. https://doi.org/10.1001/archinternmed.2012.2667.
3. Schnipper JL, Mixon A, Stein J, et al. Effects of a multifaceted medication reconciliation quality improvement intervention on patient safety: final results of the MARQUIS study. BMJ Qual Saf. 2018;27(12):954-964. https://doi.org/10.1136/bmjqs-2018-008233.
4. Salanitro AH, Kripalani S, Resnic J, et al. Rationale and design of the Multicenter Medication Reconciliation Quality Improvement Study (MARQUIS). BMC Health Serv Res. 2013;13:230. https://doi.org/10.1186/1472-6963-13-230.
5. Mueller SK, Kripalani S, Stein J, et al. Development of a toolkit to disseminate best practices in inpatient medication reconciliation. Jt Comm J Qual Patient Saf. 2013;39(8):371-382. https://doi.org/10.1016/S1553-7250(13)39051-5.
6. Maynard GA, Budnitz TL, Nickel WK, et al. 2011 John M. Eisenberg patient safety and quality awards. Mentored implementation: building leaders and achieving results through a collaborative improvement model. Innovation in patient safety and quality at the national level. Jt Comm J Qual Patient Saf. 2012;38(7):301-310. https://doi.org/10.1016/S1553-7250(12)38040-9.
7. Mueller SK, Sponsler KC, Kripalani S, Schnipper JL. Hospital-based medication reconciliation practices: a systematic review. Arch Intern Med. 2012;172(14):1057-1069. https://doi.org/10.1001/archinternmed.2012.2246.
8. Vogelsmeier A, Pepper GA, Oderda L, Weir C. Medication reconciliation: a qualitative analysis of clinicians’ perceptions. Res Social Adm Pharm. 2013;9(4):419-430. https://doi.org/10.1016/j.sapharm.2012.08.002.
9. Kaboli PJ, Hoth AB, McClimon BJ, Schnipper JL. Clinical pharmacists and inpatient medical care: a systematic review. Arch Intern Med. 2006;166(9):955-964. https://doi.org/10.1001/archinte.166.9.955.
10. Plaisant C, Wu J, Hettinger AZ, Powsner S, Shneiderman B. Novel user interface design for medication reconciliation: an evaluation of Twinlist. J Am Med Inform Assoc. 2015;22(2):340-349. https://doi.org/10.1093/jamia/ocu021.
11. Bassi J, Lau F, Bardal S. Use of information technology in medication reconciliation: a scoping review. Ann Pharmacother. 2010;44(5):885-897. https://doi.org/10.1345/aph.1M699.
12. Marien S, Krug B, Spinewine A. Electronic tools to support medication reconciliation: a systematic review. J Am Med Inform Assoc. 2017;24(1):227-240. https://doi.org/10.1093/jamia/ocw068.
13. Agrawal A. Medication errors: prevention using information technology systems. Br J Clin Pharmacol. 2009;67(6):681-686. https://doi.org/10.1111/j.1365-2125.2009.03427.x.
14. Pippins JR, Gandhi TK, Hamann C, et al. Classifying and predicting errors of inpatient medication reconciliation. J Gen Intern Med. 2008;23(9):1414-1422. https://doi.org/10.1007/s11606-008-0687-9.

© 2019 Society of Hospital Medicine

*Corresponding Author: Amanda S. Mixon, MD, MS, MSPH, FHM; E-mail: [email protected]; Telephone: 615-936-3710; Twitter: @mixovida.

Top Qualifications Hospitalist Leaders Seek in Candidates: Results from a National Survey

Hospital Medicine (HM) is medicine’s fastest-growing specialty.1 Rapid expansion of the field has been met with rising interest by young physicians, many of whom are first-time job seekers and may desire information on best practices for applying and interviewing in HM.2-4 However, no prior work has examined HM-specific candidate qualifications and qualities that may be most valued in the hiring process.

As members of the Society of Hospital Medicine (SHM) Physicians in Training Committee, a group charged with “prepar[ing] trainees and early career hospitalists in their transition into hospital medicine,” we aimed to fill this knowledge gap around the HM-specific hiring process.

METHODS

Survey Instrument

The authors developed the survey based on expertise as HM interviewers (JAD, AH, CD, EE, BK, DS, and SM) and local and national interview workshop leaders (JAD, CD, BK, SM). The questionnaire focused on objective applicant qualifications and on qualities and attributes displayed during interviews (Appendix 1). Content, length, and reliability of physician understanding were assessed via feedback from local HM group leaders.

Respondents were asked to provide nonidentifying demographics and their role in their HM group’s hiring process. If they reported no role, the survey was terminated. Subsequent standardized HM group demographic questions were adapted from the SHM State of Hospital Medicine Report.5

Survey questions were multiple-choice, ranking, and free-response items aimed at understanding how respondents assess HM candidate attributes, skills, and behavior. For ranking questions, answer choice order was randomized to reduce answer order-based bias. One free-response question asked the respondent to provide a unique interview question they use that “reveals the most about a hospitalist candidate.” Responses were then individually inserted into the list of choices for a subsequent ranking question regarding the most important qualities a candidate must demonstrate.

Respondents were asked four open-ended questions designed to understand the approach to candidate assessment: (1) use of unique interview questions (as above); (2) identification of “red flags” during interviews; (3) distinctions between assessment of long-term (LT) career hospitalist candidates versus short-term (ST) candidates (eg, those seeking positions prior to fellowship); and (4) key qualifications of ST candidates.

Survey Administration

Survey recipients were identified via SHM administrative rosters. Surveys were distributed electronically via SHM to all current nontrainee physician members who reported a United States mailing address. The survey was determined not to constitute human subjects research by the Beth Israel Deaconess Medical Center Committee on Clinical Investigations.

Data Analysis

Multiple-choice responses were analyzed descriptively. For ranking-type questions, answers were weighted based on ranking order.
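The rank-order weighting described above can be illustrated with a Borda-style tally. The paper does not specify the exact weights, so the scheme below (first place earns n points, down to 1 point for nth place) and the sample responses are illustrative assumptions, not the authors' actual analysis code.

```python
from collections import defaultdict

def weighted_rank_scores(rankings, top_n=5):
    """Aggregate ranked-choice survey answers with rank-order weights.

    Each response lists choices from most to least important. A choice
    ranked 1st earns top_n points, 2nd earns top_n - 1, and so on.
    (Borda-style weights are an assumption; the survey's exact weighting
    scheme is not reported.)
    """
    scores = defaultdict(int)
    for response in rankings:
        for rank, choice in enumerate(response[:top_n]):
            scores[choice] += top_n - rank
    # Sort choices by aggregate score, highest first.
    return sorted(scores.items(), key=lambda item: item[1], reverse=True)

# Hypothetical responses: each list is one respondent's picks, best first.
responses = [
    ["clinical experience", "communication", "references"],
    ["communication", "clinical experience", "teamwork"],
    ["clinical experience", "teamwork", "communication"],
]
print(weighted_rank_scores(responses))
```

With these sample responses, "clinical experience" accumulates 14 points (5 + 4 + 5) and tops the aggregate ranking even though it was not every respondent's first choice, which is the point of weighting by rank rather than counting only first-place votes.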

Responses to all open-ended survey questions were analyzed using thematic analysis. We used an iterative process to develop and refine codes identifying key concepts that emerged from the data. Three authors independently coded survey responses. As a group, research team members established the coding framework and resolved discrepancies via discussion to achieve consensus.

RESULTS

Survey links were sent to 8,398 e-mail addresses, of which 7,306 were undeliverable or unopened, leaving 1,092 total eligible respondents. Of these, 347 (31.8%) responded.

A total of 236 respondents reported having a formal role in HM hiring. Of these roles, 79.0% were one-on-one interviewers, 49.6% group interviewers, 45.5% telephone/videoconference interviewers, 41.5% participated on a selection committee, and 32.1% identified as the ultimate decision-maker. Regarding graduate medical education teaching status, 42.0% of respondents identified their primary workplace as a community/affiliated teaching hospital, 33.0% as a university-based teaching hospital, and 23.0% as a nonteaching hospital. Additional characteristics are reported in Appendix 2.

Quantitative Analysis

Respondents ranked the top five qualifications of HM candidates and the top five qualities a candidate should demonstrate on the interview day to be considered for hiring (Table 1).

When asked to rate agreement with the statement “I evaluate and consider all hospital medicine candidates similarly, regardless of whether they articulate an interest in hospital medicine as a long-term career or as a short-term position before fellowship,” 99 (57.2%) respondents disagreed.

Qualitative Analysis

Thematic analysis of responses to open-ended survey questions identified several “red flag” themes (Table 2). Negative interactions with current providers or staff were commonly noted. Additional red flags were a lack of knowledge or interest in the specific HM group, an inability to articulate career goals, or abnormalities in employment history or application materials. Respondents identified an overly strong focus on lifestyle or salary as factors that might limit a candidate’s chance of advancing in the hiring process.

Responses to free-text questions additionally highlighted preferred questioning techniques and approaches to HM candidate assessment (Appendix 3). Many interview questions addressed candidate interest in a particular HM program and candidate responses to challenging scenarios they had encountered. Other questions explored career development. Respondents wanted LT candidates to have specific HM career goals, while they expected ST candidates to demonstrate commitment to and appreciation of HM as a discipline.

Some respondents described their approach to candidate assessment in terms of investment and risk. LT candidates were often viewed as investments in stability and performance; they were evaluated on current abilities and future potential as related to group-specific goals. Some respondents viewed hiring ST candidates as riskier, given concerns that they might be less engaged or integrated with the group. Others viewed hiring LT candidates as comparatively riskier, relating the longer time commitment to the potential for greater impact on the group and patient care. Accordingly, these respondents viewed ST candidate hiring as less risky, estimating that the shorter time commitment carried less positive or negative impact, with the benefit of addressing urgent staffing issues or unfilled, less desirable positions. One respondent summarized: “If they plan to be a career candidate, I care more about them as people and future coworkers. Short term folks are great if we are in a pinch and can deal with personality issues for a short period of time.”

Respondents also described how valued candidate qualities could help mitigate the risk inherent in hiring, especially for ST hires. Strong interpersonal and teamwork skills were highlighted, as well as a demonstrated record of clinical excellence, evidenced by strong training backgrounds and superlative references. A key factor aiding in ST hiring decisions was prior knowledge of the candidate, such as residents or moonlighters previously working in the respondent’s institution. This allowed for familiarity with the candidate’s clinical acumen as well as perceived ease of onboarding and knowledge of the system.

DISCUSSION

We present the results of a national survey of hospitalists identifying candidate attributes, skills, and behaviors viewed most favorably by those involved in the HM hiring process. To our knowledge, this is the first research to be published on the topic of evaluating HM candidates.

Survey respondents identified demonstrable HM candidate clinical skills and experience as highly important, consistent with prior research identifying clinical skills as being among those that hospitalists most value.6 Based on these responses, job seekers should be prepared to discuss objective measures of clinical experience when appropriate, such as number of cases seen or procedures performed. HM groups may accordingly consider the use of hiring rubrics or scoring systems to standardize these measures and reduce bias.

Respondents also highly valued more subjective assessments of HM applicants’ candidacy. The most highly ranked action item was a candidate’s ability to meaningfully respond to a respondent’s customized interview question. There was also a preference for candidates who were knowledgeable about and interested in the specifics of a particular HM group. The high value placed on these elements may suggest the need for formalized coaching or interview preparation for HM candidates. Similarly, interviewer emphasis on customized questions may also highlight an opportunity for HM groups to internally standardize how to best approach subjective components of the interview.

Our heterogeneous findings on the distinctions between ST and LT candidate hiring practices support the need for additional research on the ST HM job market. Until then, our findings reinforce the importance of applicant transparency about ST versus LT career goals. Although many programs may prefer LT candidates over ST candidates, our results suggest ST candidates may benefit from targeting groups with ST needs and using the application process as an opportunity to highlight certain mitigating strengths.

Our study has limitations. While our population included diverse national representation, the response rate and demographics of our respondents may limit generalizability beyond our study population. Respondents represented multiple perspectives within the HM hiring process and were not limited to those making the final hiring decisions. For questions with prespecified multiple-choice answers, answer choices may have influenced participant responses. Our conclusions are based on the reported preferences of those involved in the HM hiring process and not actual hiring behavior. Future research should attempt to identify factors (eg, region, graduate medical education status, practice setting type) that may be responsible for some of the heterogeneous themes we observed in our analysis.

Our research represents introductory work into the previously unpublished topic of HM-specific hiring practices. These findings may provide relevant insight for trainees considering careers in HM, hospitalists reentering the job market, and those involved in career advising, professional development, and the HM hiring process.

Acknowledgments

The authors would like to acknowledge current and former members of SHM’s Physicians in Training Committee whose feedback and leadership helped to inspire this project, as well as those students, residents, and hospitalists who have participated in our Hospital Medicine Annual Meeting interview workshop.

Disclosures

The authors have no conflicts of interest to disclose.

References

1. Wachter RM, Goldman L. Zero to 50,000-The 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/NEJMp1607958.
2. Leyenaar JK, Frintner MP. Graduating pediatric residents entering the hospital medicine workforce, 2006-2015. Acad Pediatr. 2018;18(2):200-207. https://doi.org/10.1016/j.acap.2017.05.001.
3. Ratelle JT, Dupras DM, Alguire P, Masters P, Weissman A, West CP. Hospitalist career decisions among internal medicine residents. J Gen Intern Med. 2014;29(7):1026-1030. https://doi.org/10.1007/s11606-014-2811-3.
4. Sweigart JR, Tad-Y D, Kneeland P, Williams MV, Glasheen JJ. Hospital medicine resident training tracks: developing the hospital medicine pipeline. J Hosp Med. 2017;12(3):173-176. https://doi.org/10.12788/jhm.2703.
5. Society of Hospital Medicine. 2016 State of Hospital Medicine Report. 2016. https://www.hospitalmedicine.org/practice-management/shms-state-of-hospital-medicine/. Accessed July 1, 2017.
6. Plauth WH 3rd, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists’ perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247-254. https://doi.org/10.1016/S0002-9343(01)00837-3.

Issue
Journal of Hospital Medicine 14(12)
Page Number
754-757. Published online first July 24, 2019

Hospital Medicine (HM) is medicine’s fastest growing specialty.1 Rapid expansion of the field has been met with rising interest by young physicians, many of whom are first-time job seekers and may desire information on best practices for applying and interviewing in HM.2-4 However, no prior work has examined HM-specific candidate qualifications and qualities that may be most valued in the hiring process.

As members of the Society of Hospital Medicine (SHM) Physicians in Training Committee, a group charged with “prepar[ing] trainees and early career hospitalists in their transition into hospital medicine,” we aimed to fill this knowledge gap around the HM-specific hiring process.

METHODS

Survey Instrument

The authors developed the survey based on expertise as HM interviewers (JAD, AH, CD, EE, BK, DS, and SM) and local and national interview workshop leaders (JAD, CD, BK, SM). The questionnaire focused on objective applicant qualifications, qualities and attributes displayed during interviews (Appendix 1). Content, length, and reliability of physician understanding were assessed via feedback from local HM group leaders.

Respondents were asked to provide nonidentifying demographics and their role in their HM group’s hiring process. If they reported no role, the survey was terminated. Subsequent standardized HM group demographic questions were adapted from the Society of Hospital Medicine (SHM) State of Hospital Medicine Report.5

Survey questions were multiple choice, ranking and free-response aimed at understanding how respondents assess HM candidate attributes, skills, and behavior. For ranking questions, answer choice order was randomized to reduce answer order-based bias. One free-response question asked the respondent to provide a unique interview question they use that “reveals the most about a hospitalist candidate.” Responses were then individually inserted into the list of choices for a subsequent ranking question regarding the most important qualities a candidate must demonstrate.

Respondents were asked four open-ended questions designed to understand the approach to candidate assessment: (1) use of unique interview questions (as above); (2) identification of “red flags” during interviews; (3) distinctions between assessment of long-term (LT) career hospitalist candidates versus short-term (ST) candidates (eg, those seeking positions prior to fellowship); and (4) key qualifications of ST candidates.

Survey Administration

Survey recipients were identified via SHM administrative rosters. Surveys were distributed electronically via SHM to all current nontrainee physician members who reported a United States mailing address. The survey was determined to not constitute human subjects research by the Beth Israel Deaconess Medical Center Committee on Clinical Investigations.

 

 

Data Analysis

Multiple-choice responses were analyzed descriptively. For ranking-type questions, answers were weighted based on ranking order.

Responses to all open-ended survey questions were analyzed using thematic analysis. We used an iterative process to develop and refine codes identifying key concepts that emerged from the data. Three authors independently coded survey responses. As a group, research team members established the coding framework and resolved discrepancies via discussion to achieve consensus.

RESULTS

Survey links were sent to 8,398 e-mail addresses, of which 7,306 were undeliverable or unopened, leaving 1,092 total eligible respondents. Of these, 347 (31.8%) responded.

A total of 236 respondents reported having a formal role in HM hiring. Of these roles, 79.0% were one-on-one interviewers, 49.6% group interviewers, 45.5% telephone/videoconference interviewers, 41.5% participated on a selection committee, and 32.1% identified as the ultimate decision-maker. Regarding graduate medical education teaching status, 42.0% of respondents identified their primary workplace as a community/affiliated teaching hospital, 33.05% as a university-based teaching hospital, and 23.0% as a nonteaching hospital. Additional characteristics are reported in Appendix 2.

Quantitative Analysis

Respondents ranked the top five qualifications of HM candidates and the top five qualities a candidate should demonstrate on the interview day to be considered for hiring (Table 1).

When asked to rate agreement with the statement “I evaluate and consider all hospital medicine candidates similarly, regardless of whether they articulate an interest in hospital medicine as a long-term career or as a short-term position before fellowship,” 99 (57.23%) respondents disagreed.

Qualitative Analysis

Thematic analysis of responses to open-ended survey questions identified several “red flag” themes (Table 2). Negative interactions with current providers or staff were commonly noted. Additional red flags were a lack of knowledge or interest in the specific HM group, an inability to articulate career goals, or abnormalities in employment history or application materials. Respondents identified an overly strong focus on lifestyle or salary as factors that might limit a candidate’s chance of advancing in the hiring process.

Responses to free-text questions additionally highlighted preferred questioning techniques and approaches to HM candidate assessment (Appendix 3). Many interview questions addressed candidate interest in a particular HM program and candidate responses to challenging scenarios they had encountered. Other questions explored career development. Respondents wanted LT candidates to have specific HM career goals, while they expected ST candidates to demonstrate commitment to and appreciation of HM as a discipline.

Some respondents described their approach to candidate assessment in terms of investment and risk. LT candidates were often viewed as investments in stability and performance; they were evaluated on current abilities and future potential as related to group-specific goals. Some respondents viewed hiring ST candidates as more risky given concerns that they might be less engaged or integrated with the group. Others viewed the hiring of LT candidates as comparably more risky, relating the longer time commitment to the potential for higher impact on the group and patient care. Accordingly, these respondents viewed ST candidate hiring as less risky, estimating their shorter time commitment as having less of a positive or negative impact, with the benefit of addressing urgent staffing issues or unfilled less desirable positions. One respondent summarized: “If they plan to be a career candidate, I care more about them as people and future coworkers. Short term folks are great if we are in a pinch and can deal with personality issues for a short period of time.”

Respondents also described how valued candidate qualities could help mitigate the risk inherent in hiring, especially for ST hires. Strong interpersonal and teamwork skills were highlighted, as well as a demonstrated record of clinical excellence, evidenced by strong training backgrounds and superlative references. A key factor aiding in ST hiring decisions was prior knowledge of the candidate, such as residents or moonlighters previously working in the respondent’s institution. This allowed for familiarity with the candidate’s clinical acumen as well as perceived ease of onboarding and knowledge of the system.

 

 

DISCUSSION

We present the results of a national survey of hospitalists identifying candidate attributes, skills, and behaviors viewed most favorably by those involved in the HM hiring process. To our knowledge, this is the first research to be published on the topic of evaluating HM candidates.

Survey respondents identified demonstrable HM candidate clinical skills and experience as highly important, consistent with prior research identifying clinical skills as being among those that hospitalists most value.6 Based on these responses, job seekers should be prepared to discuss objective measures of clinical experience when appropriate, such as number of cases seen or procedures performed. HM groups may accordingly consider the use of hiring rubrics or scoring systems to standardize these measures and reduce bias.

Respondents also highly valued more subjective assessments of HM applicants’ candidacy. The most highly ranked action item was a candidate’s ability to meaningfully respond to a respondent’s customized interview question. There was also a preference for candidates who were knowledgeable about and interested in the specifics of a particular HM group. The high value placed on these elements may suggest the need for formalized coaching or interview preparation for HM candidates. Similarly, interviewer emphasis on customized questions may also highlight an opportunity for HM groups to internally standardize how to best approach subjective components of the interview.

Our heterogeneous findings on the distinctions between ST and LT candidate hiring practices support the need for additional research on the ST HM job market. Until then, our findings reinforce the importance of applicant transparency about ST versus LT career goals. Although many programs may prefer LT candidates over ST candidates, our results suggest ST candidates may benefit from targeting groups with ST needs and using the application process as an opportunity to highlight certain mitigating strengths.

Our study has limitations. While our population included diverse national representation, the response rate and demographics of our respondents may limit generalizability beyond our study population. Respondents represented multiple perspectives within the HM hiring process and were not limited to those making the final hiring decisions. For questions with prespecified multiple-choice answers, answer choices may have influenced participant responses. Our conclusions are based on the reported preferences of those involved in the HM hiring process and not actual hiring behavior. Future research should attempt to identify factors (eg, region, graduate medical education status, practice setting type) that may be responsible for some of the heterogeneous themes we observed in our analysis.

Our research represents introductory work into the previously unpublished topic of HM-specific hiring practices. These findings may provide relevant insight for trainees considering careers in HM, hospitalists reentering the job market, and those involved in career advising, professional development and the HM hiring process.

Acknowledgments

The authors would like to acknowledge current and former members of SHM’s Physicians in Training Committee whose feedback and leadership helped to inspire this project, as well as those students, residents, and hospitalists who have participated in our Hospital Medicine Annual Meeting interview workshop.

Disclosures

The authors have no conflicts of interest to disclose.

 

 

Hospital Medicine (HM) is medicine’s fastest growing specialty.1 Rapid expansion of the field has been met with rising interest by young physicians, many of whom are first-time job seekers and may desire information on best practices for applying and interviewing in HM.2-4 However, no prior work has examined HM-specific candidate qualifications and qualities that may be most valued in the hiring process.

As members of the Society of Hospital Medicine (SHM) Physicians in Training Committee, a group charged with “prepar[ing] trainees and early career hospitalists in their transition into hospital medicine,” we aimed to fill this knowledge gap around the HM-specific hiring process.

METHODS

Survey Instrument

The authors developed the survey based on expertise as HM interviewers (JAD, AH, CD, EE, BK, DS, and SM) and local and national interview workshop leaders (JAD, CD, BK, SM). The questionnaire focused on objective applicant qualifications, qualities and attributes displayed during interviews (Appendix 1). Content, length, and reliability of physician understanding were assessed via feedback from local HM group leaders.

Respondents were asked to provide nonidentifying demographics and their role in their HM group’s hiring process. If they reported no role, the survey was terminated. Subsequent standardized HM group demographic questions were adapted from the Society of Hospital Medicine (SHM) State of Hospital Medicine Report.5

Survey questions were multiple choice, ranking and free-response aimed at understanding how respondents assess HM candidate attributes, skills, and behavior. For ranking questions, answer choice order was randomized to reduce answer order-based bias. One free-response question asked the respondent to provide a unique interview question they use that “reveals the most about a hospitalist candidate.” Responses were then individually inserted into the list of choices for a subsequent ranking question regarding the most important qualities a candidate must demonstrate.

Respondents were asked four open-ended questions designed to understand the approach to candidate assessment: (1) use of unique interview questions (as above); (2) identification of “red flags” during interviews; (3) distinctions between assessment of long-term (LT) career hospitalist candidates versus short-term (ST) candidates (eg, those seeking positions prior to fellowship); and (4) key qualifications of ST candidates.

Survey Administration

Survey recipients were identified via SHM administrative rosters. Surveys were distributed electronically via SHM to all current nontrainee physician members who reported a United States mailing address. The survey was determined to not constitute human subjects research by the Beth Israel Deaconess Medical Center Committee on Clinical Investigations.

 

 

Data Analysis

Multiple-choice responses were analyzed descriptively. For ranking-type questions, answers were weighted based on ranking order.

Responses to all open-ended survey questions were analyzed using thematic analysis. We used an iterative process to develop and refine codes identifying key concepts that emerged from the data. Three authors independently coded survey responses. As a group, research team members established the coding framework and resolved discrepancies via discussion to achieve consensus.

RESULTS

Survey links were sent to 8,398 e-mail addresses, of which 7,306 were undeliverable or unopened, leaving 1,092 total eligible respondents. Of these, 347 (31.8%) responded.

A total of 236 respondents reported having a formal role in HM hiring. Of these roles, 79.0% were one-on-one interviewers, 49.6% group interviewers, 45.5% telephone/videoconference interviewers, 41.5% participated on a selection committee, and 32.1% identified as the ultimate decision-maker. Regarding graduate medical education teaching status, 42.0% of respondents identified their primary workplace as a community/affiliated teaching hospital, 33.05% as a university-based teaching hospital, and 23.0% as a nonteaching hospital. Additional characteristics are reported in Appendix 2.

Quantitative Analysis

Respondents ranked the top five qualifications of HM candidates and the top five qualities a candidate should demonstrate on the interview day to be considered for hiring (Table 1).

When asked to rate agreement with the statement “I evaluate and consider all hospital medicine candidates similarly, regardless of whether they articulate an interest in hospital medicine as a long-term career or as a short-term position before fellowship,” 99 (57.23%) respondents disagreed.

Qualitative Analysis

Thematic analysis of responses to open-ended survey questions identified several “red flag” themes (Table 2). Negative interactions with current providers or staff were commonly noted. Additional red flags were a lack of knowledge or interest in the specific HM group, an inability to articulate career goals, or abnormalities in employment history or application materials. Respondents identified an overly strong focus on lifestyle or salary as factors that might limit a candidate’s chance of advancing in the hiring process.

Responses to free-text questions additionally highlighted preferred questioning techniques and approaches to HM candidate assessment (Appendix 3). Many interview questions addressed candidate interest in a particular HM program and candidate responses to challenging scenarios they had encountered. Other questions explored career development. Respondents wanted LT candidates to have specific HM career goals, while they expected ST candidates to demonstrate commitment to and appreciation of HM as a discipline.

Some respondents described their approach to candidate assessment in terms of investment and risk. LT candidates were often viewed as investments in stability and performance; they were evaluated on current abilities and future potential as related to group-specific goals. Some respondents viewed hiring ST candidates as more risky given concerns that they might be less engaged or integrated with the group. Others viewed the hiring of LT candidates as comparably more risky, relating the longer time commitment to the potential for higher impact on the group and patient care. Accordingly, these respondents viewed ST candidate hiring as less risky, estimating their shorter time commitment as having less of a positive or negative impact, with the benefit of addressing urgent staffing issues or unfilled less desirable positions. One respondent summarized: “If they plan to be a career candidate, I care more about them as people and future coworkers. Short term folks are great if we are in a pinch and can deal with personality issues for a short period of time.”

Respondents also described how valued candidate qualities could help mitigate the risk inherent in hiring, especially for ST hires. Strong interpersonal and teamwork skills were highlighted, as well as a demonstrated record of clinical excellence, evidenced by strong training backgrounds and superlative references. A key factor aiding in ST hiring decisions was prior knowledge of the candidate, such as residents or moonlighters previously working in the respondent’s institution. This allowed for familiarity with the candidate’s clinical acumen as well as perceived ease of onboarding and knowledge of the system.

 

 

DISCUSSION

We present the results of a national survey of hospitalists identifying candidate attributes, skills, and behaviors viewed most favorably by those involved in the HM hiring process. To our knowledge, this is the first research to be published on the topic of evaluating HM candidates.

Survey respondents identified demonstrable HM candidate clinical skills and experience as highly important, consistent with prior research identifying clinical skills as being among those that hospitalists most value.6 Based on these responses, job seekers should be prepared to discuss objective measures of clinical experience when appropriate, such as number of cases seen or procedures performed. HM groups may accordingly consider the use of hiring rubrics or scoring systems to standardize these measures and reduce bias.

Respondents also highly valued more subjective assessments of HM applicants’ candidacy. The most highly ranked action item was a candidate’s ability to meaningfully respond to a respondent’s customized interview question. There was also a preference for candidates who were knowledgeable about and interested in the specifics of a particular HM group. The high value placed on these elements may suggest the need for formalized coaching or interview preparation for HM candidates. Similarly, interviewer emphasis on customized questions may also highlight an opportunity for HM groups to internally standardize how to best approach subjective components of the interview.

Our heterogeneous findings on the distinctions between ST and LT candidate hiring practices support the need for additional research on the ST HM job market. Until then, our findings reinforce the importance of applicant transparency about ST versus LT career goals. Although many programs may prefer LT candidates over ST candidates, our results suggest ST candidates may benefit from targeting groups with ST needs and using the application process as an opportunity to highlight certain mitigating strengths.

Our study has limitations. While our population included diverse national representation, the response rate and demographics of our respondents may limit generalizability beyond our study population. Respondents represented multiple perspectives within the HM hiring process and were not limited to those making the final hiring decisions. For questions with prespecified multiple-choice answers, answer choices may have influenced participant responses. Our conclusions are based on the reported preferences of those involved in the HM hiring process and not actual hiring behavior. Future research should attempt to identify factors (eg, region, graduate medical education status, practice setting type) that may be responsible for some of the heterogeneous themes we observed in our analysis.

Our research represents introductory work on the previously unpublished topic of HM-specific hiring practices. These findings may provide relevant insight for trainees considering careers in HM, hospitalists reentering the job market, and those involved in career advising, professional development, and the HM hiring process.

Acknowledgments

The authors would like to acknowledge current and former members of SHM’s Physicians in Training Committee whose feedback and leadership helped to inspire this project, as well as those students, residents, and hospitalists who have participated in our Hospital Medicine Annual Meeting interview workshop.

Disclosures

The authors have no conflicts of interest to disclose.
References

1. Wachter RM, Goldman L. Zero to 50,000-The 20th anniversary of the hospitalist. N Engl J Med. 2016;375(11):1009-1011. https://doi.org/10.1056/NEJMp1607958.
2. Leyenaar JK, Frintner MP. Graduating pediatric residents entering the hospital medicine workforce, 2006-2015. Acad Pediatr. 2018;18(2):200-207. https://doi.org/10.1016/j.acap.2017.05.001.
3. Ratelle JT, Dupras DM, Alguire P, Masters P, Weissman A, West CP. Hospitalist career decisions among internal medicine residents. J Gen Intern Med. 2014;29(7):1026-1030. https://doi.org/10.1007/s11606-014-2811-3.
4. Sweigart JR, Tad-Y D, Kneeland P, Williams MV, Glasheen JJ. Hospital medicine resident training tracks: developing the hospital medicine pipeline. J Hosp Med. 2017;12(3):173-176. https://doi.org/10.12788/jhm.2703.
5. 2016 State of Hospital Medicine Report. Society of Hospital Medicine; 2016. https://www.hospitalmedicine.org/practice-management/shms-state-of-hospital-medicine/. Accessed July 1, 2017.
6. Plauth WH 3rd, Pantilat SZ, Wachter RM, Fenton CL. Hospitalists’ perceptions of their residency training needs: results of a national survey. Am J Med. 2001;111(3):247-254. https://doi.org/10.1016/S0002-9343(01)00837-3.

Issue
Journal of Hospital Medicine 14(12)
Page Number
754-757. Published online first July 24, 2019
Article Source

© 2019 Society of Hospital Medicine

Correspondence Location
Corresponding Author: Joshua Allen-Dicker, MD, MPH; E-mail: [email protected]; Telephone: 617-754-4677; Twitter: @DrJoshuaAD.