Videodermoscopy as a Novel Tool for Dermatologic Education

Dermoscopy, or the noninvasive in vivo examination of the epidermis and superficial dermis using magnification, facilitates the diagnosis of pigmented and nonpigmented skin lesions.1 Despite the benefit of dermoscopy in making early and accurate diagnoses of potentially life-threatening skin cancers, only 48% of dermatologists in the United States use dermoscopy in their practices.2 The most commonly cited reason for not using dermoscopy is lack of training.

Although dermoscopy users tend to be younger and to have graduated from residency more recently than nonusers, dermatology residents continue to receive limited training in dermoscopy.2 In a survey of 139 dermatology chief residents, 48% were not satisfied with the dermoscopy training they had received during residency. Residents who received bedside instruction in dermoscopy reported greater satisfaction with their training than those who did not.3 This article provides a brief comparison of standard dermoscopy versus videodermoscopy for the instruction of trainees on common dermatologic diagnoses.

Bedside Dermoscopy

Standard optical dermatoscopes used for patient care and educational purposes typically incorporate 10-fold magnification and permit examination by a single viewer through a lens. With standard dermatoscopes, bedside dermoscopy instruction consists of the independent sequential viewing of skin lesions by instructors and trainees. Trainees must independently search for dermoscopic features noted by the instructor, which may be difficult for novice users. Simultaneous viewing of lesions would allow instructors to clearly indicate in real time pertinent dermoscopic features to their trainees.

Videodermatoscopes facilitate the simultaneous examination of cutaneous lesions by projecting the dermoscopic image onto a digital screen. Furthermore, these devices can incorporate magnifications of 200-fold or greater. In recent years, research pertaining to videodermoscopy has focused on the high-magnification capabilities of these devices, specifically dermoscopic features that are visualized at magnifications greater than 10-fold, including the light brown nests of basal cell carcinomas seen at 50- to 70-fold magnification, twisted red capillary loops seen in active scalp psoriasis at 50-fold magnification, and longitudinal white indentations seen on nail plates affected by onychomycosis at 20-fold magnification.4-6 The potential value of videodermoscopy in medical education lies not only in its high-magnification potential, which may make subtle dermoscopic findings more apparent to novice dermoscopists, but also in its ability to facilitate simultaneous dermoscopic examination by instructors and trainees.

Educational Applications for Videodermoscopy

To illustrate the educational potential of videodermoscopy, images taken with a standard dermatoscope at 10-fold magnification are presented with videodermoscopic images taken at magnifications ranging from 60- to 185-fold (Figures 1–3). These examples demonstrate the potential for videodermoscopy to facilitate the visualization of subtle dermoscopic features by novice dermoscopists, relating to both the enhanced magnification potential and the potential for simultaneous rather than sequential examination.

Figure 1. Comedolike openings of seborrheic keratosis demonstrated using standard dermoscopy (A)(10-fold magnification) versus videodermoscopy (B)(60-fold magnification).

Figure 2. Pigment network of a nevus demonstrated using standard dermoscopy (A)(10-fold magnification) versus videodermoscopy (B)(60-fold magnification).

Figure 3. Club-shaped root of a telogen hair demonstrated using standard dermoscopy (A)(10-fold magnification) versus videodermoscopy (B)(60-fold magnification).

Final Thoughts

High-magnification videodermoscopy may be a useful tool to further dermoscopic education. Videodermatoscopes vary in functionality and cost but are available at price points comparable to those of standard optical dermatoscopes. Owners of standard dermatoscopes can approximate some of the benefits of a digital videodermatoscope by pairing the standard dermatoscope with a camera, including those integrated into mobile phones and tablets. Once the standard dermatoscope is attached to a camera with a digital display, the camera's digital zoom can be used to magnify the standard dermoscopic image, enhancing the ability of novice dermoscopists to visualize subtle findings. Presenting this magnified image on a digital display allows dermoscopy instructors and trainees to view dermoscopic images of lesions simultaneously, sometimes at magnifications comparable to those of videodermatoscopes.

In the setting of a dermatology residency program, videodermoscopy can be incorporated into bedside teaching with experienced dermoscopists and into the live presentation of dermoscopic features at departmental grand rounds. By facilitating simultaneous, live, high-magnification viewing of skin lesions by dermoscopy instructors and trainees, digital videodermoscopy has the potential to address an area of weakness in dermatologic training.

References
  1. Vestergaard ME, Macaskill P, Holt PE, et al. Dermoscopy compared with naked eye examination for the diagnosis of primary melanoma: a meta-analysis of studies performed in a clinical setting. Br J Dermatol. 2008;159:669-676.
  2. Engasser HC, Warshaw EM. Dermatoscopy use by US dermatologists: a cross-sectional survey [published online July 8, 2010]. J Am Acad Dermatol. 2010;63:412-419, 419.e1-419.e2.
  3. Wu TP, Newlove T, Smith L, et al. The importance of dedicated dermoscopy training during residency: a survey of US dermatology chief residents. J Am Acad Dermatol. 2013;68:1000-1005.
  4. Seidenari S, Bellucci C, Bassoli S, et al. High magnification digital dermoscopy of basal cell carcinoma: a single-centre study on 400 cases. Acta Derm Venereol. 2014;94:677-682.
  5. Ross EK, Vincenzi C, Tosti A. Videodermoscopy in the evaluation of hair and scalp disorders. J Am Acad Dermatol. 2006;55:799-806.
  6. Piraccini BM, Balestri R, Starace M, et al. Nail digital dermoscopy (onychoscopy) in the diagnosis of onychomycosis. J Eur Acad Dermatol Venereol. 2013;27:509-513.
Author and Disclosure Information

All from the Department of Dermatology, Stanford University Medical Center, California. Dr. Nord also is from the Dermatology Service, VA Palo Alto Health Care System, California.

The authors report no conflict of interest.

This case was part of a presentation at the 8th Cosmetic Surgery Forum under the direction of Joel Schlessinger, MD; November 30-December 3, 2016; Las Vegas, Nevada. Dr. Sheu was a Top 10 Fellow and Resident Grant winner.

Correspondence: Kristin M. Nord, MD, VA Palo Alto Healthcare System, Dermatology Service, Mail Code 123, 3801 Miranda Ave, Palo Alto, CA 94304 ([email protected]).

Issue
Cutis - 100(2)
Page Number
E25-E27


Resident Pearl

  • Bedside dermoscopy training can be enhanced through the use of videodermoscopy, which permits simultaneous, high-magnification viewing.

Obstetric trauma rates show long-term decline

Obstetric trauma rates have dropped since 2000 for vaginal deliveries both with and without instrument assistance, but assisted deliveries are still six times more likely to result in injuries, according to the Agency for Healthcare Research and Quality.

In 2014, the trauma rate for unassisted vaginal deliveries was 19 per 1,000, a drop of 51% from the rate of 39 per 1,000 deliveries in 2000.

For deliveries involving instruments, such as forceps and vacuums, the trauma rate fell from 196 per 1,000 to 119, a drop of just over 39%, the AHRQ said in its annual National Healthcare Quality and Disparities Report.

For this analysis, injuries were defined as third- or fourth-degree lacerations of the perineum; rates were adjusted by age using hospitalizations for 2010 as the standard population.
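The reported declines follow directly from the rates above; a quick arithmetic check (a sketch, with a helper name of our own choosing, not from the AHRQ report):

```python
def pct_decline(old: float, new: float) -> float:
    """Percentage decline from an old rate to a new rate."""
    return (old - new) / old * 100

# Rates are third-/fourth-degree perineal lacerations per 1,000 deliveries.
assert round(pct_decline(39, 19)) == 51         # unassisted, 2000 -> 2014
assert round(pct_decline(196, 119), 1) == 39.3  # instrument-assisted, "just over 39%"
assert round(119 / 19, 1) == 6.3                # roughly six times more likely
```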


Musculoskeletal ultrasound training now offered in nearly all U.S. rheumatology fellowships

Musculoskeletal ultrasound (MSUS) fellowship opportunities are growing among rheumatology programs across the country as professionals push for more standardized education, according to a survey of fellowship program directors.

The rise in the use of MSUS among rheumatologists is spurring more comprehensive education for providers to acquire these skills, a trend that researchers expect will only become more prevalent.

“Our specialty has seen a dramatic increase in the use of point-of-care MSUS among rheumatologists as evidenced by delineation of MSUS standards of use, inclusion of MSUS-related criteria into disease classification systems, increased attendance by rheumatologists at training courses, provision of the RhMSUS certification, and the almost universal desire among fellowship training programs to integrate MSUS education into the curriculum,” wrote Karina Torralba, MD, of Loma Linda (Calif.) University, and her coinvestigators.

The investigators sent two surveys to 113 rheumatology fellowship program directors. In the first survey, responses from the directors of 108 programs indicated that 101 (94%) offered MSUS programs (Arthritis Care Res. 2017 Aug 4. doi: 10.1002/acr.23336).

While this number has increased dramatically since a 2013 survey showed that 60% offered MSUS programs, the new survey found that 66% of respondents would prefer the program to be optional, as opposed to a formal part of the fellowship curriculum.

This sentiment for nonformal education programs was mirrored in the second survey specifically targeting the 101 programs that were known to provide some sort of MSUS education.

Among the 74 program directors who responded, 30 (41%) reported having a formal curriculum, while 44 (59%) did not, citing as a major barrier a lack of fellows interested in learning the material (P = .012).

Another major barrier, according to Dr. Torralba and her colleagues, is access to faculty with enough teaching experience to properly teach MSUS skills, with 62 (84%) reporting having no or only one faculty member with MSUS certification (P = .049).
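The survey proportions quoted above can be reproduced from the reported counts (a sketch; the helper function is ours, not part of the study):

```python
def pct(n: int, total: int) -> int:
    """n out of total, rounded to the nearest whole percent."""
    return round(100 * n / total)

# First survey: 108 of 113 directors responded; 101 offered MSUS programs.
assert pct(101, 108) == 94
# Second survey: 74 of the 101 programs with MSUS education responded.
assert pct(30, 74) == 41   # formal curriculum
assert pct(44, 74) == 59   # no formal curriculum
assert pct(62, 74) == 84   # no more than one MSUS-certified faculty member
```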

Programs without adequate faculty, and even some with them, are choosing to outsource instruction to costly external programs such as the Ultrasound School of North American Rheumatologists (USSONAR) fellowship course, according to Dr. Torralba and her associates.

“While cost of external courses can be prohibitive (expenses for a 2- to 4-day course range between $1,500 and $4,000), programs may augment MSUS teaching using these courses for several reasons,” according to Dr. Torralba and her colleagues. “[These include] insufficient number of teaching faculty, limited time or support for faculty to deliver all educational content, inadequate confidence or competency for faculty to teach content, and utilization of external materials to bolster resources.”

While these barriers still need to be addressed, according to Dr. Torralba and her colleagues, half of the respondents noted that earlier barriers, such as political pushback and lack of fellow interest, are starting to recede, giving programs more room to develop the MSUS training that the researchers assert is necessary for rheumatologists in training.

“A standardized MSUS curriculum developed and endorsed by program directors and MSUS lead educators is now reasonably within sight,” the investigators wrote. “We need to work together to proactively champion MSUS education for both faculty and fellows who desire to attain this skill set.”

This study was limited by its reliance on self-reported survey data and by its small sample size. The researchers also had to rely on program directors’ perceptions of how effective their MSUS programs were rather than asking program participants directly.

The researchers reported no relevant financial disclosures.


Musculoskeletal ultrasound (MSUS) fellowship opportunities are growing among rheumatology programs across the country as professionals push for more standardized education, according to a survey of fellowship program directors.

Rise in use of MSUS among rheumatologists is spurring more comprehensive education for providers to acquire these skill sets, which researchers have gathered will only become more prevalent.

Bogdanhoda/Thinkstock
“Our specialty has seen a dramatic increase in the use of point-of-care MSUS among rheumatologists as evidenced by delineation of MSUS standards of use, inclusion of MSUS-related criteria into disease classification systems, increased attendance by rheumatologists at training courses, provision of the RhMSUS certification, and the almost universal desire among fellowship training programs to integrate MSUS education into the curriculum,” wrote Karina Torralba, MD, of Loma Linda (Calif.) University, and her coinvestigators.

 

The investigators sent two surveys to 113 rheumatology fellowship program directors. In the first survey, responses from the directors of 108 programs indicated that 101 (94%) offered MSUS programs (Arthritis Care Res. 2017 Aug 4. doi: 10.1002/acr.23336).

While this number has increased dramatically since a 2013 survey showed that 60% offered MSUS programs, the new survey found that 66% of respondents would prefer for the program to be optional, as opposed to a formal part of the fellowship program.


Musculoskeletal ultrasound (MSUS) fellowship opportunities are growing among rheumatology programs across the country as professionals push for more standardized education, according to a survey of fellowship program directors.

The rise in the use of MSUS among rheumatologists is spurring more comprehensive education so that providers can acquire a skill set that, the researchers conclude, will only become more prevalent.

“Our specialty has seen a dramatic increase in the use of point-of-care MSUS among rheumatologists as evidenced by delineation of MSUS standards of use, inclusion of MSUS-related criteria into disease classification systems, increased attendance by rheumatologists at training courses, provision of the RhMSUS certification, and the almost universal desire among fellowship training programs to integrate MSUS education into the curriculum,” wrote Karina Torralba, MD, of Loma Linda (Calif.) University, and her coinvestigators.


The investigators sent two surveys to 113 rheumatology fellowship program directors. In the first survey, responses from the directors of 108 programs indicated that 101 (94%) offered MSUS programs (Arthritis Care Res. 2017 Aug 4. doi: 10.1002/acr.23336).

While this number has increased dramatically since a 2013 survey, in which 60% of programs offered MSUS training, the new survey found that 66% of respondents would prefer the program to be optional rather than a formal part of the fellowship.

This preference for nonformal education was mirrored in the second survey, which specifically targeted the 101 programs known to provide some form of MSUS education.

Among the 74 program directors who responded, 30 (41%) reported having a formal curriculum, while 44 (59%) did not, citing as a major barrier a lack of fellows interested in learning the material (P = .012).

Another major barrier, according to Dr. Torralba and her colleagues, is access to faculty with enough teaching experience to properly teach MSUS skills: 62 programs (84%) reported having either no faculty members or only one faculty member with MSUS certification (P = .049).

Programs without qualified faculty, and even some with them, are choosing to outsource lessons to expensive external programs such as the Ultrasound School of North American Rheumatologists (USSONAR) fellowship course, according to Dr. Torralba and her associates.

“While cost of external courses can be prohibitive (a 2- to 4-day course costs between $1,500 and $4,000), programs may augment MSUS teaching using these courses for several reasons,” according to Dr. Torralba and her colleagues. These include an “insufficient number of teaching faculty, limited time or support for faculty to deliver all educational content, inadequate confidence or competency for faculty to teach content, and utilization of external materials to bolster resources.”

While these barriers still need to be addressed, according to Dr. Torralba and her colleagues, half of respondents noted that earlier obstacles, such as political pushback and lack of fellow interest, are starting to recede, giving programs more room to develop the MSUS curricula that the researchers assert are necessary for future rheumatologists.

“A standardized MSUS curriculum developed and endorsed by program directors and MSUS lead educators is now reasonably within sights,” the investigators wrote. “We need to work together to proactively champion MSUS education for both faculty and fellows who desire to attain this skill set.”

This study was limited by the self-reported nature of the survey and by its small sample. The researchers also had to rely on program directors’ perceptions of how effective their MSUS programs were rather than asking program participants directly.

The researchers reported no relevant financial disclosures.

Article Source

FROM ARTHRITIS CARE & RESEARCH

Vitals

Key clinical point: Musculoskeletal ultrasound fellowship opportunities continue to grow, but many still have not adopted a formal or mandatory program.

Major finding: Of 108 program directors who responded to a survey, 101 (94%) offered a musculoskeletal ultrasound fellowship.

Data source: Survey of 113 rheumatology fellowship program directors gathered from the Fellowship and Residency Electronic Interactive Database Access (FREIDA) online database.

Disclosures: The investigators reported no relevant financial disclosures.


The Authors Reply, “What Can Be Done to Maintain Positive Patient Experience and Improve Residents’ Satisfaction?” and “Standardized Attending Rounds to Improve the Patient Experience: A Pragmatic Cluster Randomized Controlled Trial”

Article Type
Changed
Mon, 06/04/2018 - 14:55

We thank Talari et al. for their comments in response to our randomized controlled trial evaluating the impact of standardized rounds on patient, attending, and trainee satisfaction. We agree that many factors beyond rounding structure contribute to resident satisfaction, including those highlighted by the authors, and would enthusiastically welcome additional research in this realm.

Because our study intervention addressed rounding structure, we elected to specifically focus on satisfaction with rounds, both from the physician and patient perspectives. We chose to ask about patient satisfaction with attending rounds, as opposed to more generic measures of patient satisfaction, to allow for more direct comparison between attending/resident responses and patient responses. Certainly, there are many other factors that affect overall patient experience. Surveys such as Hospital Consumer Assessment of Healthcare Providers and Systems (HCAHPS) and Press Ganey do not specifically address rounds, are often completed several weeks following hospitalization, and may have low response rates. Relying on such global assessments of patient experience may also reduce the power of the study. Although patient responses to our survey may be higher than scores seen with HCAHPS and Press Ganey, the randomized nature of our study helps control for other differences in the hospitalization experience unrelated to rounding structure. Similarly, because physician teams were randomly assigned, differences in census were not a major factor in the study. Physician blinding was not possible due to the nature of the intervention, which may have affected the satisfaction reports from attendings and residents. For our primary outcome (patient satisfaction with rounds), patients were blinded to the nature of our intervention, and all study team members involved in data collection and statistical analyses were blinded to study arm allocation.

In summary, we feel that evaluating the trade-offs and consequences of interventions should be examined from multiple perspectives, and we welcome additional investigations in this area.

Issue
Journal of Hospital Medicine 12 (9)
Page Number
786


Article Source

© 2017 Society of Hospital Medicine


What Can Be Done to Maintain Positive Patient Experience and Improve Residents’ Satisfaction? In Reference to: “Standardized Attending Rounds to Improve the Patient Experience: A Pragmatic Cluster Randomized Controlled Trial”

Article Type
Changed
Thu, 05/10/2018 - 10:47

We read the article by Monash et al.1 published in the March 2017 issue with great interest. This randomized study showed a discrepancy between patients’ and residents’ satisfaction with standardized rounds; for example, residents reported less autonomy, less efficiency, less teaching, and longer rounds.

We agree that letting residents lead rounds with minimal participation from the attending (only when needed) may improve resident satisfaction. Other factors, such as the quality of teaching, positive comments to learners during bedside rounds (whenever appropriate), and a positive attending attitude, might also help.2,3 We believe that adapting such a model with residents’ benefit in mind will lead to greater satisfaction among trainees.

On the other hand, we note that the nature of the study might have exaggerated patient satisfaction compared with real-world surveys.4 The survey appears to focus only on attending rounds and did not consider other factors such as hospitality and pain control. A low patient census and the lack of double blinding are other potential confounders.

In conclusion, we congratulate the authors for raising this important topic and for demonstrating positive patient satisfaction with standardized rounds on teaching services. Further research should focus on improving residents’ satisfaction without compromising patients’ experiences.

References

1. Monash B, Najafi N, Mourad M, et al. Standardized Attending Rounds to Improve the Patient Experience: A Pragmatic Cluster Randomized Controlled Trial. J Hosp Med. 2017;12(3):143-149. PubMed
2. Williams KN, Ramani S, Fraser B, Orlander JD. Improving bedside teaching: findings from a focus group study of learners. Acad Med. 2008;83(3):257-264. PubMed
3. Castiglioni A, Shewchuk RM, Willett LL, Heudebert GR, Centor RM. A pilot study using nominal group technique to assess residents’ perceptions of successful attending rounds. J Gen Intern Med. 2008;23(7):1060-1065. PubMed
4. Siddiqui ZK, Wu AW, Kurbanova N, Qayyum R. Comparison of Hospital Consumer Assessment of Healthcare Providers and Systems patient satisfaction scores for specialty hospitals and general medical hospitals: confounding effect of survey response rate. J Hosp Med. 2014;9(9):590-593. PubMed

Issue
Journal of Hospital Medicine 12 (9)
Page Number
785


Correspondence Location
Goutham Talari, MD, University of Kentucky Hospital, A. B. Chandler Medical Center, 800 Rose Street, MN 602, Lexington, KY, 40536; Telephone: 859-323-6047; Fax: 859-257-3873; E-mail: [email protected]

The Authors Reply: “Cost and Utility of Thrombophilia Testing”

Article Type
Changed
Fri, 12/14/2018 - 08:05

We thank Dr. Berse and colleagues for their correspondence about our paper.1,2 We are pleased that they agreed with our conclusion: thrombophilia testing has limited clinical utility in most inpatient settings.

Berse and colleagues critiqued details of our methodology in calculating payer cost, including how we estimated the number of Medicare claims for thrombophilia testing. We estimated that there were at least 280,000 Medicare claims in 2014 using CodeMap® (Wheaton Partners, LLC, Schaumburg, IL), a dataset of utilization data from the Physician Supplier Procedure Summary Master File from all Medicare Part B carriers.3 This estimate was similar to that reported in a previous publication.4

Berse and colleagues generated a lower cost estimate of $405 for 11 of the 13 thrombophilia tests referenced in our paper (excluding factor V and methylenetetrahydrofolate reductase mutations) by using the average Medicare payment.2 However, private insurance companies and self-paying patients often pay multiples of the Medicare reimbursement. Our institutional data suggest that the average reimbursement across all payers not based on a diagnosis-related group for 12 of these 13 tests is $1,327 (Table). Importantly, these expenses do not factor in the costs of increased health, disability, and life insurance premiums that may follow an inappropriately ordered, positive thrombophilia test, nor the psychological stress a patient may experience after a positive genetic test.

Thus, regardless of the precise estimates, even a conservative figure of $33 to $80 million in unnecessary spending is far too much; it is a perfect example of “Things We Do for No Reason.”

Disclosure

Nothing to report.

References

1. Petrilli CM, Heidemann L, Mack M, Durance P, Chopra V. Inpatient inherited thrombophilia testing. J Hosp Med. 2016;11(11):801-804. PubMed
2. Berse B, Lynch JA, Bowen S, Grosse SD. In Reference to: “Cost and Utility of Thrombophilia Testing.” J Hosp Med. 2017;12(9):783.
3. CodeMap® https://www.codemap.com/. Accessed March 2, 2017.
4. Somma J, Sussman II, Rand JH. An evaluation of thrombophilia screening in an urban tertiary care medical center: A “real world” experience. Am J Clin Pathol. 2006;126(1):120-127. DOI:10.1309/KV06-32LJ-8EDM-EWQT. PubMed

Issue
Journal of Hospital Medicine 12 (9)
Page Number
784


Article Source

© 2017 Society of Hospital Medicine

Correspondence Location
Christopher Petrilli, MD, Michigan Medicine, 1500 E. Medical Center Drive, Ann Arbor, MI 48105. Telephone: 734-936-5582; Fax: 734-647-9443; E-mail: [email protected]

In Reference to: “Cost and Utility of Thrombophilia Testing”

Article Type
Changed
Fri, 12/14/2018 - 08:05

The article by Petrilli et al. points to the important but complicated issue of ordering laboratory testing for thrombophilia despite multiple guidelines that dispute the clinical utility of such testing for many indications.1 We question the basis of these authors’ assertion that Medicare spends $300 to $672 million for thrombophilia testing annually. They arrived at this figure by multiplying the price of a thrombophilia test panel (between $1100 and $2400) by the number of annual Medicare claims for thrombophilia analysis, which they estimated at 280,000. The price of the panel is derived from two papers: (1) a 2001 review2 that lists prices of various thrombophilia-related tests adding up to $1782, and (2) a 2006 evaluation by Somma et al.3 of thrombophilia screening at one hospital in New York in 2005. The latter paper refers to various thrombophilia panels from Quest Diagnostics with list prices ranging from $1311 to $2429. However, the repertoire of available test panels and their prices have changed over the last decade. The cost evaluation of thrombophilia testing should be based on actual current payments for tests, and not on list prices for laboratory offerings from over a decade ago. Several laboratories offer mutational analysis of 3 genes—F5, F2, and MTHFR—as a thrombophilia risk panel. Based on the Current Procedural Terminology (CPT) codes listed by the test suppliers (81240, 81241, and 81291), the average Medicare payment for the combination of these 3 markers in 2013 was $172.4 A broader panel of several biochemical, immunological, and genetic assays had a maximum Medicare payment in 2015 of $405 (Table).5

Also, the annual number of Medicare claims for thrombophilia evaluation was not documented by Petrilli et al.1 In support of the estimate of 280,000 Medicare claims for thrombophilia testing in 2014, the authors cite Somma et al.,3 but that paper referred to 275,000 estimated new venous thromboembolism cases in the United States, not the number of claims for thrombophilia testing for all payers, let alone for Medicare. In 2013, Medicare expenditures for genetic testing of the three markers that could be identified by unique CPT codes (F2, F5, and MTHFR) amounted to $33,235,621.4 This accounts only for DNA analysis, not the functional testing of various components of blood clotting cascade, which may precede or accompany genetic testing.

In conclusion, the cost evaluation of thrombophilia screening is more challenging than the calculation by Petrilli et al. suggests.1 Even if Medicare paid as much as $400 per individual tested and assuming up to 200,000 individuals underwent thrombophilia testing per year, the aggregate Medicare expenditure would have been no more than roughly $80 million. Thus, the estimated range in the article appears to have overstated actual Medicare expenditures by an order of magnitude. This does not take away from their overall conclusion that payers are burdened with significant expenditures for laboratory testing that may not present clinical value for many patients.6 We need research into the patterns of utilization as well as improvements in documentation of expenditures associated with these tests.
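The disputed dollar figures reduce to simple arithmetic. As an illustrative sketch only, using the quantities quoted in this exchange of letters (not new data), the two competing estimates can be reproduced as follows:

```python
# Competing Medicare cost estimates for thrombophilia testing,
# using the figures quoted in the letters above (illustrative only).

claims = 280_000                       # Petrilli et al.'s estimated annual Medicare claims
panel_low, panel_high = 1_100, 2_400   # quoted list-price range for a test panel (USD)

# Original estimate: list price x estimated claims
# (quoted in the article as $300 to $672 million)
orig_low = claims * panel_low          # 308,000,000
orig_high = claims * panel_high        # 672,000,000

# Revised ceiling: at most $400 Medicare payment per individual tested,
# applied to a generous 200,000 tested individuals per year
revised = 200_000 * 400                # 80,000,000

print(f"Original estimate: ${orig_low:,} to ${orig_high:,}")
print(f"Revised ceiling:   ${revised:,}")
```

The roughly order-of-magnitude gap between the two results is exactly the discrepancy the letter describes: it stems from the unit price assumed per test, not from the claims volume alone.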

Disclosure

The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention, the Department of Veterans Affairs, or the United States government. The authors have nothing to disclose.

References

1. Petrilli CM, Heidemann L, Mack M, Durance P, Chopra V. Inpatient inherited thrombophilia testing. J Hosp Med. 2016;11(11):801-804. PubMed
2. Abramson N, Abramson S. Hypercoagulability: clinical assessment and treatment. South Med J. 2001;94(10):1013-1020. PubMed
3. Somma J, Sussman, II, Rand JH. An evaluation of thrombophilia screening in an urban tertiary care medical center: A “real world” experience. Am J Clin Pathol. 2006;126(1):120-127. PubMed
4. Lynch JA, Berse B, Dotson WD, Khoury MJ, Coomer N, Kautter J. Utilization of genetic tests: Analysis of gene-specific billing in Medicare claims data [Published online ahead of print January 26, 2017]. Genet Med. 2017. doi: 10.1038/gim.2016.209. PubMed
5. Centers for Medicare and Medicaid Services. Clinical Laboratory Fee Schedule 2016. https://www.cms.gov/Medicare/Medicare-fee-for-service-Payment/clinicallabfeesched/index.html. Accessed on December 20, 2016.
6. Stevens SM, Woller SC, Bauer KA, et al. Guidance for the evaluation and treatment of hereditary and acquired thrombophilia. J Thromb Thrombolysis. 2016;41(1):154-164. PubMed

Article PDF
Issue
Journal of Hospital Medicine 12 (9)
Topics
Page Number
783
Sections
Article PDF
Article PDF

The article by Petrilli et al. points to the important but complicated issue of ordering laboratory testing for thrombophilia despite multiple guidelines that dispute the clinical utility of such testing for many indications.1 We question the basis of these authors’ assertion that Medicare spends $300 to $672 million for thrombophilia testing annually. They arrived at this figure by multiplying the price of a thrombophilia test panel (between $1100 and $2400) by the number of annual Medicare claims for thrombophilia analysis, which they estimated at 280,000. The price of the panel is derived from two papers: (1) a 2001 review2 that lists prices of various thrombophilia-related tests adding up to $1782, and (2) a 2006 evaluation by Somma et al.3 of thrombophilia screening at one hospital in New York in 2005. The latter paper refers to various thrombophilia panels from Quest Diagnostics with list prices ranging from $1311 to $2429. However, the repertoire of available test panels and their prices have changed over the last decade. The cost evaluation of thrombophilia testing should be based on actual current payments for tests, and not on list prices for laboratory offerings from over a decade ago. Several laboratories offer mutational analysis of 3 genes—F5, F2, and MTHFR—as a thrombophilia risk panel. Based on the Current Procedural Terminology (CPT) codes listed by the test suppliers (81240, 81241, and 81291), the average Medicare payment for the combination of these 3 markers in 2013 was $172.4 A broader panel of several biochemical, immunological, and genetic assays had a maximum Medicare payment in 2015 of $405 (Table).5

In addition, the annual number of Medicare claims for thrombophilia evaluation was not documented by Petrilli et al.1 In support of their estimate of 280,000 Medicare claims for thrombophilia testing in 2014, the authors cite Somma et al.,3 but that paper referred to an estimated 275,000 new venous thromboembolism cases in the United States, not the number of claims for thrombophilia testing for all payers, let alone for Medicare. In 2013, Medicare expenditures for genetic testing of the three markers that could be identified by unique CPT codes (F2, F5, and MTHFR) amounted to $33,235,621.4 This figure accounts only for DNA analysis, not for functional testing of the various components of the blood clotting cascade, which may precede or accompany genetic testing.

In conclusion, the cost evaluation of thrombophilia screening is more challenging than the calculation by Petrilli et al. suggests.1 Even if Medicare paid as much as $400 per individual tested, and assuming that up to 200,000 individuals underwent thrombophilia testing per year, the aggregate Medicare expenditure would have been no more than roughly $80 million. Thus, the estimated range in the article appears to have overstated actual Medicare expenditures by an order of magnitude. This does not detract from the authors' overall conclusion that payers are burdened with significant expenditures for laboratory testing that may not provide clinical value for many patients.6 We need research into patterns of utilization, as well as improvements in the documentation of expenditures associated with these tests.
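The two estimates being contrasted above can be reproduced with simple arithmetic. The sketch below is purely illustrative, using only the figures quoted in this letter (the article's $1,100 to $2,400 list prices multiplied by 280,000 claims, versus this letter's upper bound of $400 per test and 200,000 individuals); the helper function is ours, not from either paper.

```python
# Illustrative arithmetic only: compares the article's list-price estimate
# with this letter's upper-bound estimate. Figures are taken from the text.

def aggregate_cost(payment_per_test, annual_tests):
    """Aggregate annual expenditure for a flat per-test payment."""
    return payment_per_test * annual_tests

# Petrilli et al.: list prices of $1,100-$2,400 per panel x 280,000 claims.
article_low = aggregate_cost(1_100, 280_000)   # $308 million
article_high = aggregate_cost(2_400, 280_000)  # $672 million

# This letter's upper bound: <= $400 per individual, <= 200,000 tested/year.
letter_bound = aggregate_cost(400, 200_000)    # $80 million

print(f"Article range: ${article_low / 1e6:.0f}M to ${article_high / 1e6:.0f}M")
print(f"Letter upper bound: ${letter_bound / 1e6:.0f}M")
print(f"Overstatement factor: {article_low / letter_bound:.1f}x to "
      f"{article_high / letter_bound:.1f}x")
```

Even at the article's lower bound, the implied expenditure is roughly four times the letter's generous upper bound, and at the higher bound, more than eight times.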

Disclosure

The findings and conclusions in this report are those of the authors and do not necessarily represent the official position of the Centers for Disease Control and Prevention, the Department of Veterans Affairs, or the United States government. The authors have nothing to disclose.


References

1. Petrilli CM, Heidemann L, Mack M, Durance P, Chopra V. Inpatient inherited thrombophilia testing. J Hosp Med. 2016;11(11):801-804.
2. Abramson N, Abramson S. Hypercoagulability: clinical assessment and treatment. South Med J. 2001;94(10):1013-1020.
3. Somma J, Sussman II, Rand JH. An evaluation of thrombophilia screening in an urban tertiary care medical center: a “real world” experience. Am J Clin Pathol. 2006;126(1):120-127.
4. Lynch JA, Berse B, Dotson WD, Khoury MJ, Coomer N, Kautter J. Utilization of genetic tests: analysis of gene-specific billing in Medicare claims data [published online ahead of print January 26, 2017]. Genet Med. 2017. doi:10.1038/gim.2016.209.
5. Centers for Medicare and Medicaid Services. Clinical Laboratory Fee Schedule 2016. https://www.cms.gov/Medicare/Medicare-fee-for-service-Payment/clinicallabfeesched/index.html. Accessed December 20, 2016.
6. Stevens SM, Woller SC, Bauer KA, et al. Guidance for the evaluation and treatment of hereditary and acquired thrombophilia. J Thromb Thrombolysis. 2016;41(1):154-164.


Issue
Journal of Hospital Medicine 12 (9)
Page Number
783

Article Source

© 2017 Society of Hospital Medicine

Correspondence Location
Julie A. Lynch, PhD, RN, MBA; [email protected]

Reducing Routine Labs—Teaching Residents Restraint

Article Type
Changed
Thu, 09/28/2017 - 21:48

Inappropriate resource utilization is a pervasive problem in healthcare, and it has received increasing emphasis over the last few years as financial strain on the healthcare system has grown. This waste has spurred new models of care, including bundled payments, accountable care organizations, and merit-based payment systems. Professional organizations have also emphasized providing high-value care and avoiding unnecessary diagnostic testing and treatment. In April 2012, the American Board of Internal Medicine (ABIM) Foundation launched the Choosing Wisely initiative to help professional societies put forth recommendations on clinical circumstances in which particular tests and procedures should be avoided.

Until recently, teaching cost-effective care was not widely considered an important part of internal medicine residency programs. In a 2010 study surveying residents about resource utilization feedback, only 37% of internal medicine residents reported receiving any feedback on resource utilization and 20% reported receiving regular feedback.1 These findings are especially significant in the broader context of national healthcare spending, as there is evidence that physicians who train in high-spending localities tend to have high-spending patterns later in their careers.2 Another study showed similar findings when looking at region of training relative to success at recognizing high-value care on ABIM test questions.3 The Accreditation Council for Graduate Medical Education has developed the Clinical Learning Environment Review program to help address this need. This program provides feedback to teaching hospitals about their success at teaching residents and fellows to provide high-value medical care.

Given the current zeitgeist of emphasizing cost-effective, high-value care, appropriate utilization of routine labs stands out as especially low-hanging fruit. The Society of Hospital Medicine, as part of the Choosing Wisely campaign, recommended minimizing routine lab draws in hospitalized patients with clinical and laboratory stability.4 Avoiding unnecessary routine lab draws spares patients the pain of superfluous phlebotomy, allows phlebotomy resources to be directed to blood draws with actual clinical utility, and saves money. There is also good evidence that hospital-acquired anemia, a consequence of overusing routine blood draws, adversely affects morbidity and mortality in post-myocardial infarction patients5,6 and, more generally, in hospitalized patients.7

Several studies have examined lab utilization on teaching services. Not surprisingly, the vast majority of test utilization is attributable to interns (45%) and residents (26%) rather than attendings.8 Another study found that internal medicine residents at one center reported a much stronger predilection than hospitalist attendings for ordering daily recurring routine labs, rather than one-time labs for the following morning, both when admitting patients and when picking up patients.9 This self-reported tendency translated into more complete blood counts and basic chemistry panels ordered per patient per day. A qualitative study of why internal medicine and general surgery residents ordered unnecessary labs yielded a number of explanations, including ingrained habit, lack of price transparency, clinical uncertainty, belief that the attending expected it, and the absence of a culture emphasizing resource utilization.10

In this issue of the Journal of Hospital Medicine, Kurtzman and colleagues report a mixed-methods study of internal medicine residents' engagement with an electronic medical record–associated dashboard providing feedback on lab utilization.11 Over a 6-month period, residents randomized to the dashboard group received weekly e-mails while on service with a brief synopsis of their lab utilization relative to their peers and a link to a dashboard with a time-series display of their relative lab ordering. Although the majority of residents (74%) opened the e-mail, only a minority (21%) actually accessed the dashboard. Moreover, there was no statistically significant relationship between dashboard use and lab ordering, though opening the dashboard was associated with a trend toward decreased ordering. The residents who participated in a focus group expressed both positive and negative opinions about the dashboard.

This is one example of social comparison feedback, which aims to improve performance by showing physicians how their practice compares with that of their peers. It has been shown to be effective in other areas of clinical medicine, such as limiting antibiotic overuse in patients with upper respiratory infections.12 One study comparing social comparison feedback with objective feedback found that, relative to standard objective feedback, social comparison feedback improved performance on a simulated work task more for high performers but less for low performers.13 The utility of this type of feedback has not been extensively studied in healthcare.

However, the audit-and-feedback strategy, of which social comparison feedback is a subtype, has been extensively studied in healthcare. A 2012 Cochrane review found that audit and feedback leads to “small but potentially important improvements in professional practice.”14 The reviewers found wide variation in the effect of feedback across the 140 studies they analyzed. The factors strongly associated with significant improvement after feedback were poor performance at baseline, a colleague or supervisor providing the audit and feedback, repeated feedback, feedback given both verbally and in writing, and clear advice or guidance on how to improve. Many of these components were missing from the present study, which may be one reason the authors did not find a significant relationship between dashboard use and lab ordering.

A number of interventions have been shown to decrease lab utilization, including unbundling the components of the metabolic panel and disallowing daily recurring lab orders,15 fee displays,16 cost reminders,17 didactics with data feedback,18 and a multifaceted approach combining didactics, monthly feedback, a checklist, and financial incentives.19 A multipronged strategy that includes education, audit and feedback, hard-stop limits on redundant lab ordering, and fee information is likely to be the most successful approach to reducing lab overutilization for both residents and attending physicians. Resource overutilization is a multifactorial problem, and multifactorial problems call for multifaceted solutions. Moreover, it may be necessary to employ both “carrot” and “stick” elements, rewarding physicians who practice appropriate stewardship while penalizing practitioners who do not adjust their lab ordering after receiving feedback showing overuse.

Physician behavior is difficult to change, and there are many reasons why physicians order inappropriate tests and studies, including provider uncertainty, fear of malpractice litigation, and inadequate time to consider the utility of a test. Audit and feedback should be integrated into residency curriculums focusing on high-value care, in which hospitalists should play a central role. If supervising attendings are not integrated into such curriculums and continue to both overorder tests themselves and allow residents to do so, then the informal curriculum will trump the formal one.

Physicians respond to incentives, and appropriately designed incentives should help steer them to order only those tests and studies that are medically indicated. Such incentives must be paired with audit and feedback, with goals that account for patient complexity. Ultimately, routine lab ordering is just one area of overutilization in hospital medicine, and techniques that successfully reduce overuse in this arena will need to be applied to other aspects of medicine, such as imaging and medication prescribing.

Disclosure

The authors declare no conflicts of interest.

References

1. Dine CJ, Miller J, Fuld A, Bellini LM, Iwashyna TJ. Educating physicians-in-training about resource utilization and their own outcomes of care in the inpatient setting. J Grad Med Educ. 2010;2(2):175-180.
2. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385-2393.
3. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640-1648.
4. Bulger J, Nickel W, Messler J, et al. Choosing wisely in adult hospital medicine: five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486-492.
5. Salisbury AC, Amin AP, Reid KJ, et al. Hospital-acquired anemia and in-hospital mortality in patients with acute myocardial infarction. Am Heart J. 2011;162(2):300-309.e3.
6. Meroño O, Cladellas M, Recasens L, et al. In-hospital acquired anemia in acute coronary syndrome. Predictors, in-hospital prognosis and one-year mortality. Rev Esp Cardiol (Engl Ed). 2012;65(8):742-748.
7. Koch CG, Li L, Sun Z, et al. Hospital-acquired anemia: prevalence, outcomes, and healthcare implications. J Hosp Med. 2013;8(9):506-512.
8. Iwashyna TJ, Fuld A, Asch DA, Bellini LM. The impact of residents, interns, and attendings on inpatient laboratory ordering patterns: a report from one university’s hospitalist service. Acad Med. 2011;86(1):139-145.
9. Ellenbogen MI, Ma M, Christensen NP, Lee J, O’Leary KJ. Differences in routine laboratory ordering between a teaching service and a hospitalist service at a single academic medical center. South Med J. 2017;110(1):25-30.
10. Sedrak MS, Patel MS, Ziemba JB, et al. Residents’ self-report on why they order perceived unnecessary inpatient laboratory tests. J Hosp Med. 2016;11(12):869-872.
11. Kurtzman G, Dine J, Epstein A, et al. Internal medicine resident engagement with a laboratory utilization dashboard: mixed methods study. J Hosp Med. 2017;12(9):743-746.
12. Meeker D, Linder JA, Fox CR, et al. Effect of behavioral interventions on inappropriate antibiotic prescribing among primary care practices: a randomized clinical trial. JAMA. 2016;315(6):562-570.
13. Moon K, Lee K, Lee K, Oah S. The effects of social comparison and objective feedback on work performance across different performance levels. J Organ Behav Manage. 2017;37(1):63-74.
14. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes (review). Cochrane Database Syst Rev. 2012;(6):CD000259.
15. Neilson EG, Johnson KB, Rosenbloom ST, Dupont WD, Talbert D, Giuse DA. The impact of peer management on test-ordering behavior. Ann Intern Med. 2004;141:196-204.
16. Feldman LS, Shihab HM, Thiemann D, et al. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med. 2013;173(10):903-908.
17. Stuebing EA, Miner TJ. Surgical vampires and rising health care expenditure: reducing the cost of daily phlebotomy. Arch Surg. 2011;146:524-527.
18. Iams W, Heck J, Kapp M, et al. A multidisciplinary housestaff-led initiative to safely reduce daily laboratory testing. Acad Med. 2016;91(6):813-820.
19. Yarbrough PM, Kukhareva PV, Horton D, Edholm K, Kawamoto K. Multifaceted intervention including education, rounding checklist implementation, cost feedback, and financial incentives reduces inpatient laboratory costs. J Hosp Med. 2016;11(5):348-354.

Issue
Journal of Hospital Medicine 12 (9)
Page Number
781-782

Inappropriate resource utilization is a pervasive problem in healthcare, and it has received increasing emphasis over the last few years as financial strain on the healthcare system has grown. This waste has led to new models of care—bundled care payments, accountable care organizations, and merit-based payment systems. Professional organizations have also emphasized the provision of high-value care and avoiding unnecessary diagnostic testing and treatment. In April 2012, the American Board of Internal Medicine (ABIM) launched the Choosing Wisely initiative to assist professional societies in putting forth recommendations on clinical circumstances in which particular tests and procedures should be avoided.

Until recently, teaching cost-effective care was not widely considered an important part of internal medicine residency programs. In a 2010 study surveying residents about resource utilization feedback, only 37% of internal medicine residents reported receiving any feedback on resource utilization and 20% reported receiving regular feedback.1 These findings are especially significant in the broader context of national healthcare spending, as there is evidence that physicians who train in high-spending localities tend to have high-spending patterns later in their careers.2 Another study showed similar findings when looking at region of training relative to success at recognizing high-value care on ABIM test questions.3 The Accreditation Council for Graduate Medical Education has developed the Clinical Learning Environment Review program to help address this need. This program provides feedback to teaching hospitals about their success at teaching residents and fellows to provide high-value medical care.

Given the current zeitgeist of emphasizing cost-effective, high-value care, appropriate utilization of routine labs is one area that stands out as an especially low-hanging fruit. The Society of Hospital Medicine, as part of the Choosing Wisely campaign, recommended minimizing routine lab draws in hospitalized patients with clinical and laboratory stability.4 Certainly, avoiding unnecessary routine lab draws is ideal because it saves patients the pain of superfluous phlebotomy, allows phlebotomy resources to be directed to blood draws with actual clinical utility, and saves money. There is also good evidence that hospital-acquired anemia, an effect of overuse of routine blood draws, has an adverse impact on morbidity and mortality in postmyocardial infarction patients5,6 and more generally in hospitalized patients.7

Several studies have examined lab utilization on teaching services. Not surprisingly, the vast majority of test utilization is attributable to the interns (45%) and residents (26%), rather than attendings.8 Another study showed that internal medicine residents at one center had a much stronger self-reported predilection for ordering daily recurring routine labs rather than one-time labs for the following morning when admitting patients and when picking up patients, as compared with hospitalist attendings.9 This self-reported tendency translated into ordering more complete blood counts and basic chemistry panels per patient per day. A qualitative study looking at why internal medicine and general surgery residents ordered unnecessary labs yielded a number of responses, including ingrained habit, lack of price transparency, clinical uncertainty, belief that the attending expected it, and absence of a culture emphasizing resource utilization.10

In this issue of the Journal of Hospital Medicine, Kurtzman and colleagues report on a mixed-methods study looking at internal medicine resident engagement at their center with an electronic medical record–associated dashboard providing feedback on lab utilization.11 Over a 6-month period, the residents randomized into the dashboard group received weekly e-mails while on service with a brief synopsis of their lab utilization relative to their peers and also a link to a dashboard with a time-series display of their relative lab ordering. While the majority of residents (74%) opened the e-mail, only a minority (21%) actually accessed the dashboard. Also, there was not a statistically significant relationship between dashboard use and lab ordering, though there was a trend to decreased lab ordering associated with opening the dashboard. The residents who participated in a focus group expressed both positive and negative opinions on the dashboard.

This is one example of social comparison feedback, which aims to improve performance by providing information to physicians on their performance relative to their peers. It has been shown to be effective in other areas of clinical medicine like limiting antibiotic overutilization in patients with upper respiratory infections.12 One study examining social comparison feedback and objective feedback found that social comparison feedback improved performance for a simulated work task more for high performers but less for low performers than standard objective feedback.13 The utility of this type of feedback has not been extensively studied in healthcare.

However, the audit and feedback strategy, of which social comparison feedback is a subtype, has been extensively studied in healthcare. A 2012 Cochrane Review found that audit and feedback leads to “small but potentially important improvements in professional practice.”14 They found a wide variation in the effect of feedback among the 140 studies they analyzed. The factors strongly associated with a significant improvement after feedback were as follows: poor performance at baseline, a colleague or supervisor as the one providing the audit and feedback, repetitive feedback, feedback given both verbally and in writing, and clear advice or guidance on how to improve. Many of these components were missing from this study—that may be one reason the authors did not find a significant relationship between dashboard use and lab ordering.

A number of interventions, however, have been shown to decrease lab utilization, including unbundling of the components of the metabolic panel and disallowing daily recurring lab orders,15 fee displays,16 cost reminders,17 didactics and data feedback,18 and a multifaceted approach (didactics, monthly feedback, checklist, and financial incentives).19 A multipronged strategy, including an element of education, audit and feedback, hard-stop limits on redundant lab ordering, and fee information is likely to be the most successful strategy to reducing lab overutilization for both residents and attending physicians. Resource overutilization is a multifactorial problem, and multifactorial problems call for multifaceted solutions. Moreover, it may be necessary to employ both “carrot” and “stick” elements to such an approach, rewarding physicians who practice appropriate stewardship, but also penalizing practitioners who do not appropriately adjust their lab ordering tendencies after receiving feedback showing overuse.

Physician behavior is difficult to change, and there are many reasons why physicians order inappropriate tests and studies, including provider uncertainty, fear of malpractice litigation, and inadequate time to consider the utility of a test. Audit and feedback should be integrated into residency curriculums focusing on high-value care, in which hospitalists should play a central role. If supervising attendings are not integrated into such curriculums and continue to both overorder tests themselves and allow residents to do so, then the informal curriculum will trump the formal one.

Physicians respond to incentives, and appropriately designed incentives should be developed to help steer them to order only those tests and studies that are medically indicated. Such incentives must be provided alongside audit and feedback with appropriate goals that account for patient complexity. Ultimately, routine lab ordering is just one area of overutilization in hospital medicine, and the techniques that are successful at reducing overuse in this arena will need to be applied to other aspects of medicine like imaging and medication prescribing.

 

 

Disclosure

The authors declare no conflicts of interest.

Inappropriate resource utilization is a pervasive problem in healthcare, and it has received increasing emphasis over the last few years as financial strain on the healthcare system has grown. This waste has led to new models of care—bundled care payments, accountable care organizations, and merit-based payment systems. Professional organizations have also emphasized the provision of high-value care and avoiding unnecessary diagnostic testing and treatment. In April 2012, the American Board of Internal Medicine (ABIM) launched the Choosing Wisely initiative to assist professional societies in putting forth recommendations on clinical circumstances in which particular tests and procedures should be avoided.

Until recently, teaching cost-effective care was not widely considered an important part of internal medicine residency programs. In a 2010 study surveying residents about resource utilization feedback, only 37% of internal medicine residents reported receiving any feedback on resource utilization and 20% reported receiving regular feedback.1 These findings are especially significant in the broader context of national healthcare spending, as there is evidence that physicians who train in high-spending localities tend to have high-spending patterns later in their careers.2 Another study showed similar findings when looking at region of training relative to success at recognizing high-value care on ABIM test questions.3 The Accreditation Council for Graduate Medical Education has developed the Clinical Learning Environment Review program to help address this need. This program provides feedback to teaching hospitals about their success at teaching residents and fellows to provide high-value medical care.

Given the current zeitgeist of emphasizing cost-effective, high-value care, appropriate utilization of routine labs stands out as especially low-hanging fruit. The Society of Hospital Medicine, as part of the Choosing Wisely campaign, recommended minimizing routine lab draws in hospitalized patients with clinical and laboratory stability.4 Certainly, avoiding unnecessary routine lab draws is ideal because it saves patients the pain of superfluous phlebotomy, allows phlebotomy resources to be directed to blood draws with actual clinical utility, and saves money. There is also good evidence that hospital-acquired anemia, an effect of overuse of routine blood draws, has an adverse impact on morbidity and mortality in patients after myocardial infarction5,6 and more generally in hospitalized patients.7

Several studies have examined lab utilization on teaching services. Not surprisingly, the vast majority of test utilization is attributable to the interns (45%) and residents (26%), rather than attendings.8 Another study showed that internal medicine residents at one center had a much stronger self-reported predilection for ordering daily recurring routine labs rather than one-time labs for the following morning when admitting patients and when picking up patients, as compared with hospitalist attendings.9 This self-reported tendency translated into ordering more complete blood counts and basic chemistry panels per patient per day. A qualitative study looking at why internal medicine and general surgery residents ordered unnecessary labs yielded a number of responses, including ingrained habit, lack of price transparency, clinical uncertainty, belief that the attending expected it, and absence of a culture emphasizing resource utilization.10

In this issue of the Journal of Hospital Medicine, Kurtzman and colleagues report on a mixed-methods study looking at internal medicine resident engagement at their center with an electronic medical record–associated dashboard providing feedback on lab utilization.11 Over a 6-month period, the residents randomized into the dashboard group received weekly e-mails while on service with a brief synopsis of their lab utilization relative to their peers and a link to a dashboard with a time-series display of their relative lab ordering. While the majority of residents (74%) opened the e-mail, only a minority (21%) actually accessed the dashboard. There was also no statistically significant relationship between dashboard use and lab ordering, though there was a trend toward decreased lab ordering associated with opening the dashboard. The residents who participated in a focus group expressed both positive and negative opinions on the dashboard.
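
As a toy illustration of how such peer-comparison feedback can be computed (this is not the study's actual dashboard logic; the clinician names and order counts below are hypothetical), one might derive each clinician's labs ordered per patient-day and compare it against the peer median:

```python
from statistics import median

def feedback_message(name, orders, patient_days, peers):
    """Compare one clinician's labs per patient-day with the peer median.

    `peers` maps clinician name -> (orders, patient_days); values are
    hypothetical, for illustration only.
    """
    rate = orders / patient_days
    peer_rates = [o / d for o, d in peers.values()]
    peer_median = median(peer_rates)
    pct_diff = 100 * (rate - peer_median) / peer_median
    direction = "more" if pct_diff > 0 else "fewer"
    return (f"{name}: you ordered {rate:.1f} labs per patient-day, "
            f"{abs(pct_diff):.0f}% {direction} than the peer median "
            f"({peer_median:.1f}).")

# Hypothetical ordering data: (labs ordered, patient-days on service)
peers = {"Resident A": (120, 60), "Resident B": (90, 60), "Resident C": (150, 60)}
print(feedback_message("Resident C", 150, 60, peers))
```

A message of this shape, sent weekly, is the essence of social comparison feedback: the recipient's own rate is meaningful only relative to the peer distribution.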

This is one example of social comparison feedback, which aims to improve performance by providing physicians with information on how their performance compares with that of their peers. It has been shown to be effective in other areas of clinical medicine, such as limiting antibiotic overutilization in patients with upper respiratory infections.12 One study comparing social comparison feedback with objective feedback found that social comparison feedback improved performance on a simulated work task more than standard objective feedback did for high performers, but less for low performers.13 The utility of this type of feedback has not been extensively studied in healthcare.

However, the audit and feedback strategy, of which social comparison feedback is a subtype, has been extensively studied in healthcare. A 2012 Cochrane Review found that audit and feedback leads to “small but potentially important improvements in professional practice.”14 They found a wide variation in the effect of feedback among the 140 studies they analyzed. The factors strongly associated with a significant improvement after feedback were as follows: poor performance at baseline, a colleague or supervisor as the one providing the audit and feedback, repetitive feedback, feedback given both verbally and in writing, and clear advice or guidance on how to improve. Many of these components were missing from this study—that may be one reason the authors did not find a significant relationship between dashboard use and lab ordering.

A number of interventions, however, have been shown to decrease lab utilization, including unbundling of the components of the metabolic panel and disallowing daily recurring lab orders,15 fee displays,16 cost reminders,17 didactics and data feedback,18 and a multifaceted approach (didactics, monthly feedback, checklist, and financial incentives).19 A multipronged strategy, including an element of education, audit and feedback, hard-stop limits on redundant lab ordering, and fee information, is likely to be the most successful strategy for reducing lab overutilization for both residents and attending physicians. Resource overutilization is a multifactorial problem, and multifactorial problems call for multifaceted solutions. Moreover, it may be necessary to employ both “carrot” and “stick” elements in such an approach, rewarding physicians who practice appropriate stewardship but also penalizing practitioners who do not appropriately adjust their lab ordering tendencies after receiving feedback showing overuse.

Physician behavior is difficult to change, and there are many reasons why physicians order inappropriate tests and studies, including provider uncertainty, fear of malpractice litigation, and inadequate time to consider the utility of a test. Audit and feedback should be integrated into residency curricula focusing on high-value care, in which hospitalists should play a central role. If supervising attendings are not integrated into such curricula and continue both to overorder tests themselves and to allow residents to do so, then the informal curriculum will trump the formal one.

Physicians respond to incentives, and appropriately designed incentives should be developed to help steer them to order only those tests and studies that are medically indicated. Such incentives must be provided alongside audit and feedback with appropriate goals that account for patient complexity. Ultimately, routine lab ordering is just one area of overutilization in hospital medicine, and the techniques that are successful at reducing overuse in this arena will need to be applied to other aspects of medicine like imaging and medication prescribing.

Disclosure

The authors declare no conflicts of interest.

References

1. Dine CJ, Miller J, Fuld A, Bellini LM, Iwashyna TJ. Educating Physicians-in-Training About Resource Utilization and Their Own Outcomes of Care in the Inpatient Setting. J Grad Med Educ. 2010;2(2):175-180.
2. Chen C, Petterson S, Phillips R, Bazemore A, Mullan F. Spending patterns in region of residency training and subsequent expenditures for care provided by practicing physicians for Medicare beneficiaries. JAMA. 2014;312(22):2385-2393.
3. Sirovich BE, Lipner RS, Johnston M, Holmboe ES. The association between residency training and internists’ ability to practice conservatively. JAMA Intern Med. 2014;174(10):1640-1648.
4. Bulger J, Nickel W, Messler J, et al. Choosing wisely in adult hospital medicine: Five opportunities for improved healthcare value. J Hosp Med. 2013;8(9):486-492.
5. Salisbury AC, Amin AP, Reid KJ, et al. Hospital-acquired anemia and in-hospital mortality in patients with acute myocardial infarction. Am Heart J. 2011;162(2):300-309.e3.
6. Meroño O, Cladellas M, Recasens L, et al. In-hospital acquired anemia in acute coronary syndrome. Predictors, in-hospital prognosis and one-year mortality. Rev Esp Cardiol (Engl Ed). 2012;65(8):742-748.
7. Koch CG, Li L, Sun Z, et al. Hospital-acquired anemia: Prevalence, outcomes, and healthcare implications. J Hosp Med. 2013;8(9):506-512.
8. Iwashyna TJ, Fuld A, Asch DA, Bellini LM. The impact of residents, interns, and attendings on inpatient laboratory ordering patterns: a report from one university’s hospitalist service. Acad Med. 2011;86(1):139-145.
9. Ellenbogen MI, Ma M, Christensen NP, Lee J, O’Leary KJ. Differences in Routine Laboratory Ordering Between a Teaching Service and a Hospitalist Service at a Single Academic Medical Center. South Med J. 2017;110(1):25-30.
10. Sedrak MS, Patel MS, Ziemba JB, et al. Residents’ self-report on why they order perceived unnecessary inpatient laboratory tests. J Hosp Med. 2016;11(12):869-872.
11. Kurtzman G, Dine J, Epstein A, et al. Internal Medicine Resident Engagement with a Laboratory Utilization Dashboard: Mixed Methods Study. J Hosp Med. 2017;12(9):743-746.
12. Meeker D, Linder JA, Fox CR, et al. Effect of Behavioral Interventions on Inappropriate Antibiotic Prescribing Among Primary Care Practices: A Randomized Clinical Trial. JAMA. 2016;315(6):562-570.
13. Moon K, Lee K, Lee K, Oah S. The Effects of Social Comparison and Objective Feedback on Work Performance Across Different Performance Levels. J Organ Behav Manage. 2017;37(1):63-74.
14. Ivers N, Jamtvedt G, Flottorp S, et al. Audit and feedback: effects on professional practice and healthcare outcomes (Review). Cochrane Database Syst Rev. 2012;(6):CD000259.
15. Neilson EG, Johnson KB, Rosenbloom ST, Dupont WD, Talbert D, Giuse DA. The Impact of Peer Management on Test-Ordering Behavior. Ann Intern Med. 2004;141:196-204.
16. Feldman LS, Shihab HM, Thiemann D, et al. Impact of providing fee data on laboratory test ordering: a controlled clinical trial. JAMA Intern Med. 2013;173(10):903-908.
17. Stuebing EA, Miner TJ. Surgical vampires and rising health care expenditure: reducing the cost of daily phlebotomy. Arch Surg. 2011;146:524-527.
18. Iams W, Heck J, Kapp M, et al. A Multidisciplinary Housestaff-Led Initiative to Safely Reduce Daily Laboratory Testing. Acad Med. 2016;91(6):813-820.
19. Yarbrough PM, Kukhareva PV, Horton D, Edholm K, Kawamoto K. Multifaceted intervention including education, rounding checklist implementation, cost feedback, and financial incentives reduces inpatient laboratory costs. J Hosp Med. 2016;11(5):348-354.


Journal of Hospital Medicine 12 (9), pages 781-782. © 2017 Society of Hospital Medicine.

Correspondence: Michael I. Ellenbogen, MD, Hospitalist Program, Division of General Internal Medicine, Johns Hopkins School of Medicine, Baltimore, Maryland; Telephone: 443-287-4362; Fax: 410-502-0923; E-mail: [email protected]

Does the Week-End Justify the Means?


Let’s face it—rates of hospital admission are on the rise, but there are still just 7 days in a week. That means that patients are increasingly admitted on weekdays and on the weekend, requiring more nurses and doctors to look after them. Why then are there no lines for coffee on a Saturday? Does this reduced intensity of staffing translate into worse care for our patients?

Since one of its earliest descriptions in hospitalized patients, the “weekend effect” has been extensively studied in various patient populations and hospital settings.1-5 The results have been varied, depending on the place of care,6 reason for care, type of admission,5,7 or admitting diagnosis.1,8,9 Many researchers have posited the drivers behind the weekend effect, including understaffed wards, intensity of specialist care, delays in procedural treatments, or severity of illness, but the truth is that we still don’t know.

Pauls et al. performed a robust systematic review and meta-analysis examining the rates of in-hospital mortality in patients admitted on the weekend compared with those admitted on weekdays.10 They analyzed predetermined subgroups to identify system- and patient-level factors associated with a difference in weekend mortality.

A total of 97 studies—comprising an astounding 51 million patients—were included in the analysis. The authors found that individuals admitted on the weekend carried an almost 20% increase in the risk of death compared with those admitted on a weekday. The effect was present both for in-hospital deaths and when looking specifically at 30-day mortality. Translated into practice, these findings correspond to an additional 14 deaths per 1000 admissions when patients are admitted on the weekend. Brain surgery can be less risky.11
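
The arithmetic behind that figure can be sketched as follows; the baseline weekday mortality used here is an assumed value chosen to reproduce the reported excess, not a number taken from the meta-analysis:

```python
def excess_deaths_per_1000(baseline_mortality, relative_risk):
    """Excess deaths per 1000 admissions, given a baseline (weekday)
    mortality risk and the relative risk for weekend admission."""
    weekend_mortality = baseline_mortality * relative_risk
    return 1000 * (weekend_mortality - baseline_mortality)

# An assumed weekday in-hospital mortality of 7% combined with a
# relative risk of 1.2 yields roughly the 14 extra deaths per 1000
# weekend admissions cited above.
print(round(excess_deaths_per_1000(0.07, 1.2), 1))  # → 14.0
```

The exercise also shows why the absolute excess depends on the case mix: the same 20% relative increase produces far fewer excess deaths in a low-mortality population than in a high-mortality one.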

Despite this concerning finding, no individual factor was identified that could account for the effect. There was a 16% and 11% increase in mortality in weekend patients associated with decreased hospital staffing and delays to procedural therapies, respectively. No differences were found when examining reduced rates of procedures or illness severity on weekends compared with weekdays. But one must always interpret subgroup analyses, even prespecified ones, with caution because they often lack the statistical power to support firm conclusions.

To this end, an important finding of the study by Pauls et al. highlights the variation in mortality risk as it relates to the weekend effect.10 Even for individuals with cancer, a disease with a relatively predictable rate of decline, there are weekend differences in mortality risk that depend upon the type of cancer.8,12 This heterogeneity persists when examining for the possible factors that contribute to the effect, introducing a significant amount of noise into the analysis, and may explain why research to date has been unable to find the proverbial black cat in the coal cellar.

One thing Pauls et al. make clear is that the weekend effect appears to be a real phenomenon, despite significant heterogeneity in the literature.10 Only a high-quality systematic review has the capability to draw such conclusions. Prior work demonstrates that this effect is substantial in some individuals, and this study confirms that it persists beyond the immediate period following admission.1,9 The elements contributing to the weekend effect remain undefined and are likely as complex as our healthcare system itself.

Society and policy makers should resist the tantalizing urge to invoke interventions aimed at fixing this issue before fully understanding the drivers of a system problem. The government of the United Kingdom has pledged in its manifesto to create a “7-day National Health Service,” in which weekend services and physician staffing will match those of weekdays. Considering recent labor tensions involving junior doctors in the United Kingdom over pay and working hours, the stakes are at an all-time high.

But such drastic measures violate a primary directive of quality improvement science: study and understand the problem before reflexively jumping to solutions. This will require new research endeavors aimed at determining the underlying factor(s) responsible for the weekend effect. Only once we are confident in its cause can careful evaluation of targeted interventions aimed at the highest-risk admissions be instituted. As global hospital and healthcare budgets bend under increasing strain, a critical component of any proposed intervention must be an examination of its cost-effectiveness. Because the weekend effect is one of increased mortality, it will be hard to justify an acceptable price for an individual’s life. And it is not as straightforward as a randomized trial examining the efficacy of parachutes. Any formal evaluation must account for the unintended consequences and opportunity costs of implementing a potential fix aimed at minimizing the weekend effect.

The weekend effect has now been studied for over 15 years. Pauls et al. add to our knowledge of this phenomenon, confirming that the overall risk of mortality for patients admitted on the weekend is real, variable, and substantial.10 As more individuals are admitted to hospitals, resulting in increasing numbers of admissions on the weekend, a desperate search for the underlying cause must be carried out before we can fix it. Whatever the means to the end, our elation will continue to be tempered by a feeling of uneasiness every time our coworkers joyously exclaim, “TGIF!”

Disclosure

The authors have nothing to disclose.

References

1. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. doi:10.1056/NEJMsa003376.
2. Bell CM, Redelmeier DA. Waiting for urgent procedures on the weekend among emergently hospitalized patients. Am J Med. 2004;117(3):175-181. doi:10.1016/j.amjmed.2004.02.047.
3. Kalaitzakis E, Helgeson J, Strömdahl M, Tóth E. Weekend admission in upper GI bleeding: does it have an impact on outcome? Gastrointest Endosc. 2015;81(5):1295-1296. doi:10.1016/j.gie.2014.12.003.
4. Nanchal R, Kumar G, Taneja A, et al. Pulmonary embolism: the weekend effect. Chest. 2012;142(3):690-696. doi:10.1378/chest.11-2663.
5. Ricciardi R, Roberts PL, Read TE, Baxter NN, Marcello PW, Schoetz DJ. Mortality rate after nonelective hospital admission. Arch Surg. 2011;146(5):545-551.
6. Wunsch H, Mapstone J, Brady T, Hanks R, Rowan K. Hospital mortality associated with day and time of admission to intensive care units. Intensive Care Med. 2004;30(5):895-901. doi:10.1007/s00134-004-2170-3.
7. Freemantle N, Richardson M, Wood J, et al. Weekend hospitalization and additional risk of death: an analysis of inpatient data. J R Soc Med. 2012;105(2):74-84. doi:10.1258/jrsm.2012.120009.
8. Lapointe-Shaw L, Bell CM. It’s not you, it’s me: time to narrow the gap in weekend care. BMJ Qual Saf. 2014;23(3):180-182. doi:10.1136/bmjqs-2013-002674.
9. Concha OP, Gallego B, Hillman K, Delaney GP, Coiera E. Do variations in hospital mortality patterns after weekend admission reflect reduced quality of care or different patient cohorts? A population-based study. BMJ Qual Saf. 2014;23(3):215-222. doi:10.1136/bmjqs-2013-002218.
10. Pauls LA, Johnson-Paben R, McGready J, Murphy JD, Pronovost PJ, Wu CL. The Weekend Effect in Hospitalized Patients: A Meta-analysis. J Hosp Med. 2017;12(9):760-766.
11. American College of Surgeons. NSQIP Risk Calculator. http://riskcalculator.facs.org/RiskCalculator/. Accessed on July 5, 2017.
12. Lapointe-Shaw L, Abushomar H, Chen XK, et al. Care and outcomes of patients with cancer admitted to the hospital on weekends and holidays: a retrospective cohort study. J Natl Compr Canc Netw. 2016;14(7):867-874.

Journal of Hospital Medicine 12 (9), pages 779-780.

Let’s face it—rates of hospital admission are on the rise, but there are still just 7 days in a week. That means that patients are increasingly admitted on weekdays and on the weekend, requiring more nurses and doctors to look after them. Why then are there no lines for coffee on a Saturday? Does this reduced intensity of staffing translate into worse care for our patients?

Since one of its earliest descriptions in hospitalized patients, the “weekend effect” has been extensively studied in various patient populations and hospital settings.1-5 The results have been varied, depending on the place of care,6 reason for care, type of admission,5,7 or admitting diagnosis.1,8,9 Many researchers have posited the drivers behind the weekend effect, including understaffed wards, intensity of specialist care, delays in procedural treatments, or severity of illness, but the truth is that we still don’t know.

Pauls et al. performed a robust systematic review and meta-analysis examining the rates of in-hospital mortality in patients admitted on the weekend compared with those admitted on weekdays.10 They analyzed predetermined subgroups to identify system- and patient-level factors associated with a difference in weekend mortality.

A total of 97 studies—comprising an astounding 51 million patients—was included in the study. They found that individuals admitted on the weekend carried an almost 20% increase in the risk of death compared with those who landed in hospital on a weekday. The effect was present for both in-hospital deaths and when looking specifically at 30-day mortality. Translating these findings into practice, an additional 14 deaths per 1000 admissions occur when patients are admitted on the weekend. Brain surgery can be less risky.11

Despite this concerning finding, no individual factor was identified that could account for the effect. There was a 16% and 11% increase in mortality in weekend patients associated with decreased hospital staffing and delays to procedural therapies, respectively. No differences were found when examining reduced rates of procedures or illness severity on weekends compared with weekdays. But one must always interpret subgroup analyses, even prespecified ones, with caution because they often lack the statistical power to make concrete conclusions.

To this end, an important finding of the study by Pauls et al. highlights the variation in mortality risk as it relates to the weekend effect.10 Even for individuals with cancer, a disease with a relatively predictable rate of decline, there are weekend differences in mortality risk that depend upon the type of cancer.8,12 This heterogeneity persists when examining for the possible factors that contribute to the effect, introducing a significant amount of noise into the analysis, and may explain why research to date has been unable to find the proverbial black cat in the coal cellar.

One thing Pauls et al. makes clear is that the weekend effect appears to be a real phenomenon, despite significant heterogeneity in the literature.10 Only a high-quality, systematic review has the capability to draw such conclusions. Prior work demonstrates that this effect is substantial in some individuals,and this study confirms that it perseveres beyond an immediate time period following admission.1,9 The elements contributing to the weekend effect remain undefined and are likely as complex as our healthcare system itself.

Society and policy makers should resist the tantalizing urge to invoke interventions aimed at fixing this issue before fully understanding the drivers of a system problem. The government of the United Kingdom has decreed a manifesto to create a “7-day National Health Service,” in which weekend services and physician staffing will match that of the weekdays. Considering recent labor tensions between junior doctors in the United Kingdom over pay and working hours, the stakes are at an all-time high.

But such drastic measures violate a primary directive of quality improvement science to study and understand the problem before reflexively jumping to solutions. This will require new research endeavors aimed at determining the underlying factor(s) responsible for the weekend effect. Once we are confident in its cause, only then can careful evaluation of targeted interventions aimed at the highest-risk admissions be instituted. As global hospital and healthcare budgets bend under increasing strain, a critical component of any proposed intervention must be to examine the cost-effectiveness in doing so. Because the weekend effect is one of increased mortality, it will be hard to justify an acceptable price for an individual’s life. And it is not as straightforward as a randomized trial examining the efficacy of parachutes. Any formal evaluation must account for the unintended consequences and opportunity costs of implementing a potential fix aimed at minimizing the weekend effect.

The weekend effect has now been studied for over 15 years. Pauls et al. add to our knowledge of this phenomenon, confirming that the overall risk of mortality for patients admitted on the weekend is real, variable, and substantial.10 As more individuals are admitted to hospitals, resulting in increasing numbers of admissions on the weekend, a desperate search for the underlying cause must be carried out before we can fix it. Whatever the means to the end, our elation will continue to be tempered by a feeling of uneasiness every time our coworkers joyously exclaim, “TGIF!”

 

 

Disclosure

The authors have nothing to disclose.

Let’s face it—rates of hospital admission are on the rise, but there are still just 7 days in a week. That means that patients are increasingly admitted on weekdays and on the weekend, requiring more nurses and doctors to look after them. Why then are there no lines for coffee on a Saturday? Does this reduced intensity of staffing translate into worse care for our patients?

Since one of its earliest descriptions in hospitalized patients, the “weekend effect” has been extensively studied in various patient populations and hospital settings.1-5 The results have been varied, depending on the place of care,6 reason for care, type of admission,5,7 or admitting diagnosis.1,8,9 Many researchers have posited the drivers behind the weekend effect, including understaffed wards, intensity of specialist care, delays in procedural treatments, or severity of illness, but the truth is that we still don’t know.

Pauls et al. performed a robust systematic review and meta-analysis examining the rates of in-hospital mortality in patients admitted on the weekend compared with those admitted on weekdays.10 They analyzed predetermined subgroups to identify system- and patient-level factors associated with a difference in weekend mortality.

A total of 97 studies—comprising an astounding 51 million patients—was included in the study. They found that individuals admitted on the weekend carried an almost 20% increase in the risk of death compared with those who landed in hospital on a weekday. The effect was present for both in-hospital deaths and when looking specifically at 30-day mortality. Translating these findings into practice, an additional 14 deaths per 1000 admissions occur when patients are admitted on the weekend. Brain surgery can be less risky.11

Despite this concerning finding, no individual factor was identified that could account for the effect. There was a 16% and 11% increase in mortality in weekend patients associated with decreased hospital staffing and delays to procedural therapies, respectively. No differences were found when examining reduced rates of procedures or illness severity on weekends compared with weekdays. But one must always interpret subgroup analyses, even prespecified ones, with caution because they often lack the statistical power to make concrete conclusions.

To this end, an important finding of the study by Pauls et al. highlights the variation in mortality risk as it relates to the weekend effect.10 Even for individuals with cancer, a disease with a relatively predictable rate of decline, there are weekend differences in mortality risk that depend upon the type of cancer.8,12 This heterogeneity persists when examining for the possible factors that contribute to the effect, introducing a significant amount of noise into the analysis, and may explain why research to date has been unable to find the proverbial black cat in the coal cellar.

One thing Pauls et al. makes clear is that the weekend effect appears to be a real phenomenon, despite significant heterogeneity in the literature.10 Only a high-quality, systematic review has the capability to draw such conclusions. Prior work demonstrates that this effect is substantial in some individuals,and this study confirms that it perseveres beyond an immediate time period following admission.1,9 The elements contributing to the weekend effect remain undefined and are likely as complex as our healthcare system itself.

Society and policy makers should resist the tantalizing urge to invoke interventions aimed at fixing this issue before fully understanding the drivers of a system problem. The government of the United Kingdom has decreed a manifesto to create a “7-day National Health Service,” in which weekend services and physician staffing will match that of the weekdays. Considering recent labor tensions between junior doctors in the United Kingdom over pay and working hours, the stakes are at an all-time high.

But such drastic measures violate a primary directive of quality improvement science: study and understand the problem before reflexively jumping to solutions. This will require new research aimed at determining the underlying factor(s) responsible for the weekend effect. Only once we are confident in its cause can targeted interventions aimed at the highest-risk admissions be carefully evaluated. As global hospital and healthcare budgets bend under increasing strain, a critical component of any proposed intervention must be an examination of its cost-effectiveness. Because the weekend effect is one of increased mortality, it will be hard to justify an acceptable price for an individual’s life. And it is not as straightforward as a randomized trial examining the efficacy of parachutes. Any formal evaluation must account for the unintended consequences and opportunity costs of implementing a potential fix aimed at minimizing the weekend effect.

The weekend effect has now been studied for over 15 years. Pauls et al. add to our knowledge of this phenomenon, confirming that the overall risk of mortality for patients admitted on the weekend is real, variable, and substantial.10 As more individuals are admitted to hospitals, resulting in increasing numbers of admissions on the weekend, a desperate search for the underlying cause must be carried out before we can fix it. Whatever the means to the end, our elation will continue to be tempered by a feeling of uneasiness every time our coworkers joyously exclaim, “TGIF!”

Disclosure

The authors have nothing to disclose.

References

1. Bell CM, Redelmeier DA. Mortality among patients admitted to hospitals on weekends as compared with weekdays. N Engl J Med. 2001;345(9):663-668. doi:10.1056/NEJMsa003376.
2. Bell CM, Redelmeier DA. Waiting for urgent procedures on the weekend among emergently hospitalized patients. Am J Med. 2004;117(3):175-181. doi:10.1016/j.amjmed.2004.02.047.
3. Kalaitzakis E, Helgeson J, Strömdahl M, Tóth E. Weekend admission in upper GI bleeding: does it have an impact on outcome? Gastrointest Endosc. 2015;81(5):1295-1296. doi:10.1016/j.gie.2014.12.003.
4. Nanchal R, Kumar G, Taneja A, et al. Pulmonary embolism: the weekend effect. Chest. 2012;142(3):690-696. doi:10.1378/chest.11-2663.
5. Ricciardi R, Roberts PL, Read TE, Baxter NN, Marcello PW, Schoetz DJ. Mortality rate after nonelective hospital admission. Arch Surg. 2011;146(5):545-551.
6. Wunsch H, Mapstone J, Brady T, Hanks R, Rowan K. Hospital mortality associated with day and time of admission to intensive care units. Intensive Care Med. 2004;30(5):895-901. doi:10.1007/s00134-004-2170-3.
7. Freemantle N, Richardson M, Wood J, et al. Weekend hospitalization and additional risk of death: an analysis of inpatient data. J R Soc Med. 2012;105(2):74-84. doi:10.1258/jrsm.2012.120009.
8. Lapointe-Shaw L, Bell CM. It’s not you, it’s me: time to narrow the gap in weekend care. BMJ Qual Saf. 2014;23(3):180-182. doi:10.1136/bmjqs-2013-002674.
9. Concha OP, Gallego B, Hillman K, Delaney GP, Coiera E. Do variations in hospital mortality patterns after weekend admission reflect reduced quality of care or different patient cohorts? A population-based study. BMJ Qual Saf. 2014;23(3):215-222. doi:10.1136/bmjqs-2013-002218.
10. Pauls LA, Johnson-Paben R, McGready J, Murphy JD, Pronovost PJ, Wu CL. The weekend effect in hospitalized patients: a meta-analysis. J Hosp Med. 2017;12(9):760-766.
11. American College of Surgeons. NSQIP Risk Calculator. http://riskcalculator.facs.org/RiskCalculator/. Accessed July 5, 2017.
12. Lapointe-Shaw L, Abushomar H, Chen XK, et al. Care and outcomes of patients with cancer admitted to the hospital on weekends and holidays: a retrospective cohort study. J Natl Compr Canc Netw. 2016;14(7):867-874.

Issue
Journal of Hospital Medicine 12 (9)
Page Number
779-780
Article Source

© 2017 Society of Hospital Medicine

Correspondence Location
Chaim M. Bell, MD, PhD, Sinai Health System, Department of Medicine, 600 University Ave. Room 427, Toronto, ON, Canada M5G 1X5; Telephone: 416-586-4800 x2583; Fax: 416-586-8350; E-mail: [email protected]

Inpatient Thrombophilia Testing: At What Expense?

Article Type
Changed
Fri, 12/14/2018 - 08:04

Thrombotic disorders, such as venous thromboembolism (VTE) and acute ischemic stroke, are highly prevalent,1 morbid, and anxiety-provoking conditions for patients, their families, and providers.2 Often, a clear cause for these thrombotic events cannot be found, leading to diagnoses of “cryptogenic stroke” or “idiopathic VTE.” In response, many patients and clinicians search for a cause with thrombophilia testing.

However, evaluation for thrombophilia is rarely clinically useful in hospitalized patients. Test results are often inaccurate in the setting of acute thrombosis or active anticoagulation. Even when thrombophilia results are reliable, they seldom alter immediate management of the underlying condition, especially for the inherited forms.3 An important exception is when there is high clinical suspicion for the antiphospholipid syndrome (APS), because APS test results may affect both short-term and long-term drug choices and international normalized ratio target range. Despite the broad recommendations against routine use of thrombophilia testing (including the Choosing Wisely campaign),4 patterns and cost of testing for inpatient thrombophilia evaluation have not been well reported.

In this issue of the Journal of Hospital Medicine, Cox et al.5 and Mou et al.6 retrospectively review the appropriateness and impact of inpatient thrombophilia testing at 2 academic centers. In the report by Mou and colleagues, nearly half of all thrombophilia tests were deemed inappropriate, at an excess cost of over $40,000. Cox and colleagues found that 77% of patients received 1 or more thrombophilia tests with minimal clinical utility. Perhaps most striking, Cox and colleagues report that management was affected in only 2 of 163 patients (1.2%) who received thrombophilia testing; both had cryptogenic stroke, and both were started on anticoagulation after testing positive for multiple coagulation defects.

These studies confirm 2 key findings: first, that 43% to 63% of tests are potentially inaccurate or of low utility, and second, that inpatient thrombophilia testing can be costly. Importantly, the costs of inappropriate testing were likely underestimated. For example, Mou et al. excluded 16.6% of tests that were performed for reasons that could not always be easily justified, such as “tests ordered with no documentation or justification” or “work-up sent solely on suspicion of possible thrombotic event without diagnostic confirmation.” Additionally, Mou et al. defined appropriateness more generously than current guidelines do; for example, “recurrent provoked VTE” was listed as an appropriate indication for thrombophilia testing, although this is not supported by current guidelines for inherited thrombophilia evaluation. Similarly, Cox et al. included cryptogenic stroke as an appropriate indication for thrombophilia testing; however, current American Heart Association and American Stroke Association guidelines state that the usefulness of screening for hypercoagulable states in these patients is unknown.7 Furthermore, APS testing is not recommended in all cases of cryptogenic stroke in the absence of other clinical manifestations of APS.7

It remains puzzling why physicians continue to order inpatient thrombophilia testing despite its low clinical utility and potentially inaccurate results. Cox and colleagues suggested that a lack of clinician and patient education may partly explain this practice. Likewise, easy access to “thrombophilia panels” makes it easy for any clinician to order a number of tests that appear to be expert endorsed because of their inclusion in the panel. Cox et al. found that 79% of all thrombophilia tests were ordered as part of a panel. Finally, patients and clinicians are continually searching for a reason why the thromboembolic event occurred. Thrombophilia test results (even if potentially inaccurate) may provide a sense of relief for both parties, no matter the outcome. If a thrombophilia is found, patients and clinicians often have a sense of why the thrombotic event occurred. If testing is negative, there may be a false reassurance that “no genetic” cause for thrombosis exists.8

How can we improve care in this regard? Given the magnitude of the financial and psychological costs of inappropriate inpatient thrombophilia testing,9 a robust deimplementation effort is needed.10,11 Electronic-medical-record–based solutions may be the most effective tool to educate physicians at the point of care while simultaneously deterring inappropriate ordering. Examples include eliminating tests without evidence of clinical utility in the inpatient setting (eg, methylenetetrahydrofolate reductase); using hard stops to prevent unintentional duplicate tests12; and preventing providers from ordering tests that are not reliable in certain settings, such as protein S activity in patients receiving warfarin. The latter intervention alone would have prevented 16% of the tests (in 44% of the patients) performed in the study by Cox et al. Other promising efforts include embedding guidelines into order sets and requiring the provider to choose a guideline-based reason before being allowed to order such a test. Finally, eliminating thrombophilia “panels” may reduce unnecessary duplicate testing and avoid giving a false sense of clinical validation to ordering providers who may not be familiar with the indications or nuances of each individual test.

In light of mounting evidence, including the 2 important studies discussed above, it is no longer appropriate or wise to allow unfettered access to thrombophilia testing in hospitalized patients. The evidence suggests that these tests are often ordered without regard to expense, utility, or accuracy in hospital-based settings. Deimplementation efforts that pair education with hard stops and limited access to thrombophilia testing in the electronic ordering system now appear necessary.

Disclosure

Lauren Heidemann and Christopher Petrilli have no conflicts of interest to report. Geoffrey Barnes reports the following conflicts of interest: Research funding from NIH/NHLBI (K01 HL135392), Blue Cross-Blue Shield of Michigan, and BMS/Pfizer. Consulting from BMS/Pfizer and Portola.

References

1. Heit JA. Thrombophilia: common questions on laboratory assessment and management. Hematology Am Soc Hematol Educ Program. 2007:127-135.
2. Mozaffarian D, Benjamin EJ, Go AS, et al. Heart disease and stroke statistics--2015 update: a report from the American Heart Association. Circulation. 2015;131(4):e29-322.
3. Petrilli CM, Heidemann L, Mack M, Durance P, Chopra V. Inpatient inherited thrombophilia testing. J Hosp Med. 2016;11(11):801-804.
4. American Society of Hematology. Ten things physicians and patients should question. Choosing Wisely. 2014. http://www.choosingwisely.org/societies/american-society-of-hematology/. Accessed July 3, 2017.
5. Cox N, Johnson SA, Vazquez S, et al. Patterns and appropriateness of thrombophilia testing in an academic medical center. J Hosp Med. 2017;12(9):705-709.
6. Mou E, Kwang H, Hom J, et al. Magnitude of potentially inappropriate thrombophilia testing in the inpatient hospital setting. J Hosp Med. 2017;12(9):735-738.
7. Kernan WN, Ovbiagele B, Black HR, et al. Guidelines for the prevention of stroke in patients with stroke and transient ischemic attack: a guideline for healthcare professionals from the American Heart Association/American Stroke Association. Stroke. 2014;45(7):2160-2236.
8. Stevens SM, Woller SC, Bauer KA, et al. Guidance for the evaluation and treatment of hereditary and acquired thrombophilia. J Thromb Thrombolysis. 2016;41(1):154-164.
9. Bank I, Scavenius MP, Buller HR, Middeldorp S. Social aspects of genetic testing for factor V Leiden mutation in healthy individuals and their importance for daily practice. Thromb Res. 2004;113(1):7-12.
10. Niven DJ, Mrklas KJ, Holodinsky JK, et al. Towards understanding the de-adoption of low-value clinical practices: a scoping review. BMC Med. 2015;13:255.
11. Prasad V, Ioannidis JP. Evidence-based de-implementation for contradicted, unproven, and aspiring healthcare practices. Implement Sci. 2014;9:1.
12. Procop GW, Keating C, Stagno P, et al. Reducing duplicate testing: a comparison of two clinical decision support tools. Am J Clin Pathol. 2015;143(5):623-626.

Issue
Journal of Hospital Medicine 12 (9)
Page Number
777-778
Article Source

© 2017 Society of Hospital Medicine

Correspondence Location
Lauren Heidemann, MD, 1500 E Medical Center Drive, SPC 5376, Ann Arbor, MI, 48109-5376; Telephone: 734-647-6928; Fax: 734-232-9343; E-mail: [email protected]