An Analysis of the Involvement and Attitudes of Resident Physicians in Reporting Errors in Patient Care
From Adelante Healthcare, Mesa, AZ (Dr. Chin), University Hospitals of Cleveland, Cleveland, OH (Drs. Delozier, Bascug, Levine, Bejanishvili, and Wynbrandt and Janet C. Peachey, Rachel M. Cerminara, and Sharon M. Darkovich), and Houston Methodist Hospitals, Houston, TX (Dr. Bhakta).
Abstract
Background: Resident physicians play an active role in the reporting of errors that occur in patient care. Previous studies indicate that residents significantly underreport errors in patient care.
Methods: Fifty-four of 80 eligible residents enrolled at University Hospitals–Regional Hospitals (UH-RH) during the 2018-2019 academic year completed a survey assessing their knowledge and experience in completing Patient Advocacy and Shared Stories (PASS) reports, which serve as incident reports for errors in patient care in the UH health system. A series of interventions aimed at educating residents about the PASS report system was then conducted. The 54 residents who completed the first survey received it again 4 months later.
Results: Residents demonstrated greater understanding of when filing PASS reports was appropriate after the intervention, as significantly more residents reported having been involved in a situation where they should have filed a PASS report but did not (P = 0.036).
Conclusion: In this study, residents often did not report errors in patient care because they simply did not know the process for doing so. In addition, many residents often felt that the reporting of patient errors could be used as a form of retaliation.
Keywords: resident physicians; quality improvement; high-value care; medical errors; patient safety.
Resident physicians play a critical role in patient care. Residents undergo extensive supervised training in order to one day be able to practice medicine in an unsupervised setting, with the goal of providing the highest quality of care possible. One study reported that primary care provided by residents in a training program is of similar or higher quality than that provided by attending physicians.1
Besides providing high-quality care, it is important that residents play an active role in the reporting of errors that occur regarding patient care as well as in identifying events that may compromise patient safety and quality.2 In fact, increased reporting of patient errors has been shown to decrease liability-related costs for hospitals.3 Unfortunately, physicians, and residents in particular, have historically been poor reporters of errors in patient care.4 This is especially true when comparing physicians to other health professionals, such as nurses, in error reporting.5
Several studies have examined the involvement of residents in reporting errors in patient care. One recent study showed that a graduate medical education financial incentive program significantly increased the number of patient safety events reported by residents and fellows.6 This study, along with several others, supports the concept of using incentives to help improve the reporting of errors in patient care for physicians in training.7-10 Another study used Quality Improvement Knowledge Assessment Tool (QIKAT) scores to assess quality improvement (QI) knowledge. The study demonstrated that self-assessment scores of QI skills using QIKAT scores improved following a targeted intervention.11 Because further information on the involvement and attitudes of residents in reporting errors in patient care is needed, University Hospitals of Cleveland (UH) designed and implemented a QI study during the 2018-2019 academic year. This prospective study used anonymous surveys to objectively examine the involvement and attitudes of residents in reporting errors in patient care.
Methods
The UH health system uses Patient Advocacy and Shared Stories (PASS) reports as incident reports to not only disclose errors in patient care but also to identify any events that may compromise patient safety and quality. Based on preliminary review, nurses, ancillary staff, and administrators file the majority of PASS reports.
The study group consisted of residents at University Hospitals–Regional Hospitals (UH-RH), which comprises 2 hospitals: University Hospitals–Richmond Medical Center (UH-RMC) and University Hospitals–Bedford Medical Center (UH-BMC). UH-RMC and UH-BMC are 2 medium-sized university-affiliated community hospitals located in the Cleveland metropolitan area in Northeast Ohio. Both serve as clinical training sites for Case Western Reserve University School of Medicine and Lake Erie College of Osteopathic Medicine, the latter of which helped fund this study. The study was submitted to the Institutional Review Board (IRB) of University Hospitals of Cleveland and granted “not human subjects research” status as a QI study.
Surveys
UH-RH offers residency programs in dermatology, emergency medicine, family medicine, internal medicine, orthopedic surgery, and physical medicine and rehabilitation, along with a 1-year transitional/preliminary year. A total of 80 residents were enrolled at UH-RH during the 2018-2019 academic year. All 80 residents received an email in December 2018 asking them to complete an anonymous survey regarding the PASS report system. The survey was administered using the REDCap software system and consisted of 15 multiple-choice questions. As an incentive for completing the survey, residents were offered a $10 Amazon gift card, funded through a research grant from Lake Erie College of Osteopathic Medicine. Residents were given 1 week to complete the survey; by the end of that week, 54 of 80 residents had completed the first survey.
Following the first survey, efforts were undertaken by the study authors, in conjunction with the quality improvement department at UH-RH, to educate residents about the PASS report system. These interventions included giving a lecture on the PASS report system during resident didactic sessions, sending an email to all residents about the PASS report system, and providing residents an opportunity to complete an optional online training course regarding the PASS report system. As an incentive for completing the online training course, residents were offered a $10 Amazon gift card. As before, the gift cards were funded through a research grant from Lake Erie College of Osteopathic Medicine.
A second survey was administered in April 2019, 4 months after the first survey. To determine whether the intervention had an impact on the involvement and attitudes of residents in reporting errors in patient care, only residents who completed the first survey were sent the second survey. The second survey consisted of the same questions as the first survey and was also administered using the REDCap software system. As an incentive for completing the survey, residents were offered another $10 Amazon gift card, again funded through a research grant from Lake Erie College of Osteopathic Medicine. Residents were given 1 week to complete the survey.
Analysis
Chi-square analyses were utilized to examine differences between preintervention and postintervention responses across categories. All analyses were conducted using R statistical software, version 3.6.1 (R Foundation for Statistical Computing).
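The pre/post comparisons above reduce to chi-square tests of independence on 2×2 tables of response counts. As a minimal sketch of that computation, the Pearson statistic can be computed directly; the counts below are illustrative reconstructions from the reported percentages (29/54 preintervention, 17/29 postintervention), not the study's raw data, and in practice a statistical package such as R's `chisq.test` would be used.

```python
def chi_square_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction)
    for the 2x2 contingency table [[a, b], [c, d]]."""
    n = a + b + c + d
    row1, row2 = a + b, c + d
    col1, col2 = a + c, b + d
    # Expected count for each cell: (row total * column total) / grand total
    expected = [row1 * col1 / n, row1 * col2 / n,
                row2 * col1 / n, row2 * col2 / n]
    observed = [a, b, c, d]
    # Sum of (observed - expected)^2 / expected over the four cells
    return sum((o - e) ** 2 / e for o, e in zip(observed, expected))

# Illustrative counts: 29 of 54 "yes" preintervention, 17 of 29 "yes" post.
stat = chi_square_2x2(29, 25, 17, 12)
```

The resulting statistic is compared against the chi-square distribution with 1 degree of freedom to obtain a P value.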
Results
A total of 54 of 80 eligible residents responded to the first survey (Table). Twenty-nine of 54 eligible residents responded to the second survey. Postintervention, significantly more residents indicated being involved in a situation where they should have filed a PASS report but did not (58.6% vs 53.7%; P = 0.036). PASS knowledge also trended toward improvement postintervention, with fewer residents reporting not knowing how to file a PASS report, although this difference did not reach statistical significance (31.5% vs 55.2%; P = 0.059). No other improvements were significant, nor were there significant differences in responses between any other categories pre- and postintervention.
Discussion
Errors in patient care are a common occurrence in the hospital setting. Reporting errors when they happen is important for hospitals to gather data and improve care for patients, but studies show that patient errors are usually underreported. This is concerning, as data on errors and other aspects of patient care are needed to inform quality improvement programs.
This study measured residents’ attitudes and knowledge regarding the filing of a PASS report. It also aimed to increase both the frequency of and knowledge about filing a PASS report through interventions. The results from each survey indicated a statistically significant increase in knowledge of when to file a PASS report. In the first survey, 53.7% of residents responded that they were involved in an instance where they should have filed a PASS report but did not. In the second survey, 58.6% of residents reported being involved in such an instance. This difference was statistically significant (P = 0.036), suggesting that the intervention was successful at increasing residents’ knowledge regarding PASS reports and the appropriate times to file one.
The survey results also showed a trend toward increased knowledge of how to file PASS reports, with the percentage of residents reporting that they did not know how to file one falling from 55.2% on the first survey to 31.5% on the second. This suggests an increase in knowledge of how to file a PASS report among residents at our hospital after the intervention. It should be noted that the intervention performed in this study was simple, easy to carry out, and could be replicated at any hospital system that uses a similar system for reporting patient errors.
Another important trend indicating the effectiveness of the intervention was a 15% increase in knowledge of what the PASS acronym stands for, along with a 13.1% aggregate increase in the number of residents who had filed a PASS report. This suggests that residents may have wanted to file a PASS report previously but simply did not know how to until the intervention. Consistent with this, the aggregate percentage of residents who had never filed a PASS report decreased, and the number of PASS reports filed increased.
While PASS reports are a great way for hospitals to gain data and insight into problems at their sites, there was also a negative view of PASS reports. For example, a large percentage of residents indicated that filing a PASS report would not make any difference and that PASS reports are often used as a form of retaliation, either against themselves as the submitter or the person(s) mentioned in the PASS report. More specifically, more than 50% of residents felt that PASS reports were sometimes or often used as a form of retaliation against others. While many residents correctly identified in the survey that PASS reports are not equivalent to a “write-up,” it is concerning that they still feel there is a strong potential for retaliation when filing a PASS report. This finding is unfortunate but matches the results of a multicenter study that found that 44.6% of residents felt uncomfortable reporting patient errors, possibly secondary to fear of retaliation, along with issues with the reporting system.12
It is interesting to note that a minority of residents indicated that they feel that PASS reports are filed as often as they should be (25.9% on first survey and 24.1% on second survey). This is concerning, as the data gathered through PASS reports is used to improve patient care. However, the percentage reported in our study, although low, is higher than that reported in a similar study involving patients with Medicare insurance, which showed that only 14% of patient safety events were reported.13 These results demonstrate that further interventions are necessary in order to ensure that a PASS report is filed each time a patient safety event occurs.
Another finding of note is that the majority of residents feel that the process of filing a PASS report is too time consuming. Most residents who have completed a PASS report stated that it took them between 10 and 20 minutes, but those same individuals feel that it should take less than 10 minutes. This is an important issue for hospital systems to address: reducing the time it takes to file a PASS report may facilitate an increase in the number of PASS reports filed.
We administered our surveys via email outreach asking residents to complete an anonymous online survey regarding the PASS report system using the REDCap software system. Researchers have various ways of administering surveys, ranging from paper forms to email to mobile apps. One study showed that online surveys tend to have higher response rates than non-online surveys, such as paper and telephone surveys, likely due to their ease of use.14 At the same time, unsolicited email surveys have been shown to have a negative influence on response rates. Mobile apps are a newer way of administering surveys; however, research has not found any significant difference in the time required to complete a survey using mobile apps compared to other administration methods, nor did mobile app surveys have increased response rates.15
To increase the response rate of our surveys, we offered gift cards to the study population for completing the survey. Studies have shown that surveys that offer incentives tend to have higher response rates than surveys that do not.16 Also, in addition to serving as a method for gathering data from our study population, we used our surveys as an intervention to increase awareness of PASS reporting, as reported in other studies. For example, another study used the HABITS questionnaire to not only gather information about children’s diet, but also to promote behavioral change towards healthy eating habits.17
This study had several limitations. First, the study was conducted using an anonymous online survey, which means we could not clarify questions that residents found confusing or that needed further explanation. For example, 17 residents indicated in the first survey that they knew how to file a PASS report, but 19 residents indicated in the same survey that they had filed a PASS report in the past.
A second limitation of the study was that fewer residents completed the second survey (29 of 54 eligible residents) compared to the first survey (54 of 80 eligible residents). This may have impacted the results of the analysis, as certain findings were not statistically significant, despite trends in the data.
A third limitation of the study is that not all of the residents that completed the first and second surveys completed the entire intervention. For example, some residents did not attend the didactic lecture discussing PASS reports, and as such may not have received the appropriate training prior to completing the second survey.
The findings from this study can be used by the residency programs at UH-RH and by residency programs across the country to improve the involvement and attitudes of residents in reporting errors in patient care. Hospital staff need to be encouraged and educated on how to better report patient errors and the importance of reporting these errors. It would benefit hospital systems to provide continued and targeted training to familiarize physicians with the process of reporting patient errors, and take steps to reduce the time it takes to report patient errors. By increasing the reporting of errors, hospitals will be able to improve patient care through initiatives aimed at preventing errors.
Conclusion
Residents play an important role in providing high-quality care for patients. Part of providing high-quality care is the reporting of errors in patient care when they occur. Physicians, and in particular, residents, have historically underreported errors in patient care. Part of this underreporting results from residents not knowing or understanding the process of filing a report and feeling that the reports could be used as a form of retaliation. For hospital systems to continue to improve patient care, it is important for residents to not only know how to report errors in patient care but to feel comfortable doing so.
Corresponding author: Andrew J. Chin, DO, MS, MPH, Department of Internal Medicine, Adelante Healthcare, 1705 W Main St, Mesa, AZ 85201; [email protected].
Financial disclosures: None.
Funding: This study was funded by a research grant provided by Lake Erie College of Osteopathic Medicine to Andrew J. Chin and Anish Bhakta.
1. Zallman L, Ma J, Xiao L, Lasser KE. Quality of US primary care delivered by resident and staff physicians. J Gen Intern Med. 2010;25(11):1193-1197.
2. Bagian JP. The future of graduate medical education: a systems-based approach to ensure patient safety. Acad Med. 2015;90(9):1199-1202.
3. Kachalia A, Kaufman SR, Boothman R, et al. Liability claims and costs before and after implementation of a medical disclosure program. Ann Intern Med. 2010;153(4):213-221.
4. Kaldjian LC, Jones EW, Wu BJ, et al. Reporting medical errors to improve patient safety: a survey of physicians in teaching hospitals. Arch Intern Med. 2008;168(1):40-46.
5. Rowin EJ, Lucier D, Pauker SG, et al. Does error and adverse event reporting by physicians and nurses differ? Jt Comm J Qual Patient Saf. 2008;34(9):537-545.
6. Turner DA, Bae J, Cheely G, et al. Improving resident and fellow engagement in patient safety through a graduate medical education incentive program. J Grad Med Educ. 2018;10(6):671-675.
7. Macht R, Balen A, McAneny D, Hess D. A multifaceted intervention to increase surgery resident engagement in reporting adverse events. J Surg Educ. 2015;72(6):e117-e122.
8. Scott DR, Weimer M, English C, et al. A novel approach to increase residents’ involvement in reporting adverse events. Acad Med. 2011;86(6):742-746.
9. Stewart DA, Junn J, Adams MA, et al. House staff participation in patient safety reporting: identification of predominant barriers and implementation of a pilot program. South Med J. 2016;109(7):395-400.
10. Vidyarthi AR, Green AL, Rosenbluth G, Baron RB. Engaging residents and fellows to improve institution-wide quality: the first six years of a novel financial incentive program. Acad Med. 2014;89(3):460-468.
11. Fok MC, Wong RY. Impact of a competency based curriculum on quality improvement among internal medicine residents. BMC Med Educ. 2014;14:252.
12. Wijesekera TP, Sanders L, Windish DM. Education and reporting of diagnostic errors among physicians in internal medicine training programs. JAMA Intern Med. 2018;178(11):1548-1549.
13. Levinson DR. Hospital incident reporting systems do not capture most patient harm. Washington, D.C.: U.S. Department of Health and Human Services Office of the Inspector General. January 2012. Report No. OEI-06-09-00091.
14. Evans JR, Mathur A. The value of online surveys. Internet Research. 2005;15(2):192-219.
15. Marcano Belisario JS, Jamsek J, Huckvale K, et al. Comparison of self‐administered survey questionnaire responses collected using mobile apps versus other methods. Cochrane Database Syst Rev. 2015;7:MR000042.
16. Manfreda KL, Batagelj Z, Vehovar V. Design of web survey questionnaires: three basic experiments. J Comput Mediat Commun. 2002;7(3):JCMC731.
17. Wright ND, Groisman‐Perelstein AE, Wylie‐Rosett J, et al. A lifestyle assessment and intervention tool for pediatric weight management: the HABITS questionnaire. J Hum Nutr Diet. 2011;24(1):96-100.
A third limitation of the study is that not all of the residents that completed the first and second surveys completed the entire intervention. For example, some residents did not attend the didactic lecture discussing PASS reports, and as such may not have received the appropriate training prior to completing the second survey.
The findings from this study can be used by the residency programs at UH-RH and by residency programs across the country to improve the involvement and attitudes of residents in reporting errors in patient care. Hospital staff need to be encouraged and educated on how to better report patient errors and the importance of reporting these errors. It would benefit hospital systems to provide continued and targeted training to familiarize physicians with the process of reporting patient errors, and take steps to reduce the time it takes to report patient errors. By increasing the reporting of errors, hospitals will be able to improve patient care through initiatives aimed at preventing errors.
Conclusion
Residents play an important role in providing high-quality care for patients. Part of providing high-quality care is the reporting of errors in patient care when they occur. Physicians, and in particular, residents, have historically underreported errors in patient care. Part of this underreporting results from residents not knowing or understanding the process of filing a report and feeling that the reports could be used as a form of retaliation. For hospital systems to continue to improve patient care, it is important for residents to not only know how to report errors in patient care but to feel comfortable doing so.
Corresponding author: Andrew J. Chin, DO, MS, MPH, Department of Internal Medicine, Adelante Healthcare, 1705 W Main St, Mesa, AZ 85201; [email protected].
Financial disclosures: None.
Funding: This study was funded by a research grant provided by Lake Eric College of Osteopathic Medicine to Andrew J. Chin and Anish Bhakta.
From Adelante Healthcare, Mesa, AZ (Dr. Chin), University Hospitals of Cleveland, Cleveland, OH (Drs. Delozier, Bascug, Levine, Bejanishvili, and Wynbrandt and Janet C. Peachey, Rachel M. Cerminara, and Sharon M. Darkovich), and Houston Methodist Hospitals, Houston, TX (Dr. Bhakta).
Abstract
Background: Resident physicians play an active role in the reporting of errors that occur in patient care. Previous studies indicate that residents significantly underreport errors in patient care.
Methods: Fifty-four of 80 eligible residents enrolled at University Hospitals–Regional Hospitals (UH-RH) during the 2018-2019 academic year completed a survey assessing their knowledge and experience in completing Patient Advocacy and Shared Stories (PASS) reports, which serve as incident reports in the UH health system in reporting errors in patient care. A series of interventions aimed at educating residents about the PASS report system were then conducted. The 54 residents who completed the first survey received it again 4 months later.
Results: Residents demonstrated greater understanding of when filing PASS reports was appropriate after the intervention, as significantly more residents reported having been involved in a situation where they should have filed a PASS report but did not (P = 0.036).
Conclusion: In this study, residents often did not report errors in patient care because they simply did not know the process for doing so. In addition, many residents often felt that the reporting of patient errors could be used as a form of retaliation.
Keywords: resident physicians; quality improvement; high-value care; medical errors; patient safety.
Resident physicians play a critical role in patient care. Residents undergo extensive supervised training in order to one day be able to practice medicine in an unsupervised setting, with the goal of providing the highest quality of care possible. One study reported that primary care provided by residents in a training program is of similar or higher quality than that provided by attending physicians.1
Besides providing high-quality care, it is important that residents play an active role in the reporting of errors that occur regarding patient care as well as in identifying events that may compromise patient safety and quality.2 In fact, increased reporting of patient errors has been shown to decrease liability-related costs for hospitals.3 Unfortunately, physicians, and residents in particular, have historically been poor reporters of errors in patient care.4 This is especially true when comparing physicians to other health professionals, such as nurses, in error reporting.5
Several studies have examined the involvement of residents in reporting errors in patient care. One recent study showed that a graduate medical education financial incentive program significantly increased the number of patient safety events reported by residents and fellows.6 This study, along with several others, supports the concept of using incentives to help improve the reporting of errors in patient care for physicians in training.7-10 Another study used Quality Improvement Knowledge Assessment Tool (QIKAT) scores to assess quality improvement (QI) knowledge. The study demonstrated that self-assessment scores of QI skills using QIKAT scores improved following a targeted intervention.11 Because further information on the involvement and attitudes of residents in reporting errors in patient care is needed, University Hospitals of Cleveland (UH) designed and implemented a QI study during the 2018-2019 academic year. This prospective study used anonymous surveys to objectively examine the involvement and attitudes of residents in reporting errors in patient care.
Methods
The UH health system uses Patient Advocacy and Shared Stories (PASS) reports as incident reports to not only disclose errors in patient care but also to identify any events that may compromise patient safety and quality. Based on preliminary review, nurses, ancillary staff, and administrators file the majority of PASS reports.
The study group consisted of residents at University Hospitals–Regional Hospitals (UH-RH), which comprises 2 hospitals: University Hospitals–Richmond Medical Center (UH-RMC) and University Hospitals–Bedford Medical Center (UH-BMC). UH-RMC and UH-BMC are 2 medium-sized university-affiliated community hospitals located in the Cleveland metropolitan area in Northeast Ohio. Both serve as clinical training sites for Case Western Reserve University School of Medicine and Lake Erie College of Osteopathic Medicine, the latter of which helped fund this study. The study was submitted to the Institutional Review Board (IRB) of University Hospitals of Cleveland and granted “not human subjects research” status as a QI study.
Surveys
UH-RH offers residency programs in dermatology, emergency medicine, family medicine, internal medicine, orthopedic surgery, and physical medicine and rehabilitation, along with a 1-year transitional/preliminary year. A total of 80 residents enrolled at UH-RH during the 2018-2019 academic year. All 80 residents at UH-RH received an email in December 2018 asking them to complete an anonymous survey regarding the PASS report system. The survey was administered using the REDCap software system and consisted of 15 multiple-choice questions. As an incentive for completing the survey, residents were offered a $10 Amazon gift card. The gift cards were funded through a research grant from Lake Erie College of Osteopathic Medicine. Residents were given 1 week to complete the survey. At the end of the week, 54 of 80 residents completed the first survey.
Following the first survey, efforts were undertaken by the study authors, in conjunction with the quality improvement department at UH-RH, to educate residents about the PASS report system. These interventions included giving a lecture on the PASS report system during resident didactic sessions, sending an email to all residents about the PASS report system, and providing residents an opportunity to complete an optional online training course regarding the PASS report system. As an incentive for completing the online training course, residents were offered a $10 Amazon gift card. As before, the gift cards were funded through a research grant from Lake Erie College of Osteopathic Medicine.
A second survey was administered in April 2019, 4 months after the first survey. To determine whether the intervention made an impact on the involvement and attitudes of residents in reporting errors in patient care, only residents who completed the first survey were sent the second survey. The second survey consisted of the same questions as the first survey and was also administered using the REDCap software system. As an incentive for completing the survey, residents were offered another $10 Amazon gift card, again funded through a research grant from Lake Erie College of Osteopathic Medicine. Residents were given 1 week to complete the survey.
Analysis
Chi-square analyses were utilized to examine differences between preintervention and postintervention responses across categories. All analyses were conducted using R statistical software, version 3.6.1 (R Foundation for Statistical Computing).
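The analyses were run in R; for readers who want to reproduce the core calculation, the 2 × 2 chi-square test can be sketched in plain Python (the counts below are hypothetical, not the study data):

```python
import math

def chi_square_2x2(table):
    """Pearson chi-square test (no continuity correction) for a 2x2 table.

    Returns (statistic, p_value); degrees of freedom = 1.
    """
    (a, b), (c, d) = table
    n = a + b + c + d
    row_totals = [a + b, c + d]
    col_totals = [a + c, b + d]
    stat = 0.0
    for i, obs_row in enumerate(table):
        for j, obs in enumerate(obs_row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (obs - expected) ** 2 / expected
    # With df = 1, the chi-square survival function reduces to erfc(sqrt(x/2))
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, p_value

# Hypothetical yes/no counts for one survey item, pre vs post intervention
stat, p = chi_square_2x2([[29, 25], [12, 17]])
```

In R, the analogous call is `chisq.test(matrix(c(29, 12, 25, 17), nrow = 2), correct = FALSE)`; note that R applies the Yates continuity correction to 2 × 2 tables by default.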
Results
A total of 54 of 80 eligible residents responded to the first survey (Table). Twenty-nine of 54 eligible residents responded to the second survey. Postintervention, significantly more residents indicated having been involved in a situation where they should have filed a PASS report but did not (58.6% vs 53.7%; P = 0.036). A trend toward improvement in PASS knowledge was also seen, with more residents reporting that they knew how to file a PASS report postintervention (55.2% vs 31.5%; P = 0.059), although this difference did not reach statistical significance. There were no other significant differences in responses between the pre- and postintervention surveys.
Discussion
Errors in patient care are a common occurrence in the hospital setting. Reporting errors when they happen gives hospitals the data they need to better care for patients, yet studies show that patient errors are usually underreported. This is concerning, as data on errors and other aspects of patient care are needed to inform quality improvement programs.
This study measured residents’ attitudes and knowledge regarding the filing of a PASS report. It also aimed to increase both the frequency of and knowledge about filing a PASS report through interventions. The results from each survey indicated a statistically significant increase in knowledge of when to file a PASS report. In the first survey, 53.7% of residents responded that they were involved in an instance where they should have filed a PASS report but did not; in the second survey, 58.6% of residents reported such an instance. This difference was statistically significant (P = 0.036), suggesting that the intervention was successful at increasing residents’ knowledge regarding PASS reports and the appropriate times to file one.
The survey results also showed a trend toward an increase in residents’ aggregate knowledge of how to file a PASS report between the first and second surveys (from 31.5% to 55.2%). This suggests an increase in knowledge of how to file a PASS report among residents at our hospital after the intervention. It should be noted that the intervention performed in this study was simple, easy to carry out, and can be replicated at any hospital system that uses a similar system for reporting patient errors.
Another important trend indicating the effectiveness of the intervention was a 15% increase in knowledge of what the PASS report acronym stands for, along with a 13.1% aggregate increase in the number of residents who filed a PASS report. This indicated that residents may have wanted to file a PASS report previously but simply did not know how to until the intervention. In addition, there was also a decrease in the aggregate percentages of residents who had never filed a PASS report and an increase in how many PASS reports were filed.
While PASS reports are a great way for hospitals to gain data and insight into problems at their sites, there was also a negative view of PASS reports. For example, a large percentage of residents indicated that filing a PASS report would not make any difference and that PASS reports are often used as a form of retaliation, either against themselves as the submitter or the person(s) mentioned in the PASS report. More specifically, more than 50% of residents felt that PASS reports were sometimes or often used as a form of retaliation against others. While many residents correctly identified in the survey that PASS reports are not equivalent to a “write-up,” it is concerning that they still feel there is a strong potential for retaliation when filing a PASS report. This finding is unfortunate but matches the results of a multicenter study that found that 44.6% of residents felt uncomfortable reporting patient errors, possibly secondary to fear of retaliation, along with issues with the reporting system.12
It is interesting to note that only a minority of residents indicated that they feel PASS reports are filed as often as they should be (25.9% on the first survey and 24.1% on the second survey). This is concerning, as the data gathered through PASS reports are used to improve patient care. However, the percentage reported in our study, although low, is higher than that reported in a similar study involving patients with Medicare insurance, which showed that only 14% of patient safety events were reported.13 These results demonstrate that further interventions are necessary to ensure that a PASS report is filed each time a patient safety event occurs.
Another finding of note is that the majority of residents feel the process of filing a PASS report is too time consuming. Most residents who have completed a PASS report stated that it took them between 10 and 20 minutes, but those same individuals also feel it should take less than 10 minutes. This is an important issue for hospital systems to address, as reducing the time it takes to file a PASS report may increase the number of PASS reports filed.
We administered our surveys by emailing residents a request to complete an anonymous online survey regarding the PASS report system, hosted on the REDCap software system. Researchers have various ways of administering surveys, including paper surveys, email, and mobile apps. One study showed that online surveys tend to have higher response rates than non-online surveys, such as paper and telephone surveys, likely due to their ease of use.14 At the same time, unsolicited email surveys have been shown to have a negative influence on response rates. Mobile apps are a newer way of administering surveys; however, research has not found any significant difference in the time required to complete a survey using a mobile app compared with other methods, nor did surveys using mobile apps have higher response rates.15
To increase the response rate of our surveys, we offered gift cards to the study population for completing the survey. Studies have shown that surveys that offer incentives tend to have higher response rates than surveys that do not.16 In addition to serving as a method for gathering data from our study population, our surveys also functioned as an intervention to increase awareness of PASS reporting, an approach reported in other studies. For example, one study used the HABITS questionnaire not only to gather information about children’s diets, but also to promote behavioral change toward healthy eating habits.17
This study had several limitations. First, the study was conducted using an anonymous online survey, which meant we could not clarify questions that residents found confusing or that needed further explanation. For example, 17 residents indicated in the first survey that they knew how to file a PASS report, yet 19 residents indicated in the same survey that they had filed a PASS report in the past.
A second limitation of the study was that fewer residents completed the second survey (29 of 54 eligible residents) compared to the first survey (54 of 80 eligible residents). This may have impacted the results of the analysis, as certain findings were not statistically significant, despite trends in the data.
A third limitation of the study is that not all of the residents who completed the first and second surveys completed the entire intervention. For example, some residents did not attend the didactic lecture discussing PASS reports, and as such may not have received the appropriate training prior to completing the second survey.
The findings from this study can be used by the residency programs at UH-RH and by residency programs across the country to improve the involvement and attitudes of residents in reporting errors in patient care. Hospital staff need to be encouraged to report patient errors and educated about how to report them and why doing so matters. It would benefit hospital systems to provide continued, targeted training to familiarize physicians with the process of reporting patient errors and to take steps to reduce the time it takes to report them. By increasing the reporting of errors, hospitals will be able to improve patient care through initiatives aimed at preventing errors.
Conclusion
Residents play an important role in providing high-quality care for patients. Part of providing high-quality care is the reporting of errors in patient care when they occur. Physicians, and in particular, residents, have historically underreported errors in patient care. Part of this underreporting results from residents not knowing or understanding the process of filing a report and feeling that the reports could be used as a form of retaliation. For hospital systems to continue to improve patient care, it is important for residents to not only know how to report errors in patient care but to feel comfortable doing so.
Corresponding author: Andrew J. Chin, DO, MS, MPH, Department of Internal Medicine, Adelante Healthcare, 1705 W Main St, Mesa, AZ 85201; [email protected].
Financial disclosures: None.
Funding: This study was funded by a research grant provided by Lake Erie College of Osteopathic Medicine to Andrew J. Chin and Anish Bhakta.
1. Zallman L, Ma J, Xiao L, Lasser KE. Quality of US primary care delivered by resident and staff physicians. J Gen Intern Med. 2010;25(11):1193-1197.
2. Bagain JP. The future of graduate medical education: a systems-based approach to ensure patient safety. Acad Med. 2015;90(9):1199-1202.
3. Kachalia A, Kaufman SR, Boothman R, et al. Liability claims and costs before and after implementation of a medical disclosure program. Ann Intern Med. 2010;153(4):213-221.
4. Kaldjian LC, Jones EW, Wu BJ, et al. Reporting medical errors to improve patient safety: a survey of physicians in teaching hospitals. Arch Intern Med. 2008;168(1):40-46.
5. Rowin EJ, Lucier D, Pauker SG, et al. Does error and adverse event reporting by physicians and nurses differ? Jt Comm J Qual Patient Saf. 2008;34(9):537-545.
6. Turner DA, Bae J, Cheely G, et al. Improving resident and fellow engagement in patient safety through a graduate medical education incentive program. J Grad Med Educ. 2018;10(6):671-675.
7. Macht R, Balen A, McAneny D, Hess D. A multifaceted intervention to increase surgery resident engagement in reporting adverse events. J Surg Educ. 2015;72(6):e117-e122.
8. Scott DR, Weimer M, English C, et al. A novel approach to increase residents’ involvement in reporting adverse events. Acad Med. 2011;86(6):742-746.
9. Stewart DA, Junn J, Adams MA, et al. House staff participation in patient safety reporting: identification of predominant barriers and implementation of a pilot program. South Med J. 2016;109(7):395-400.
10. Vidyarthi AR, Green AL, Rosenbluth G, Baron RB. Engaging residents and fellows to improve institution-wide quality: the first six years of a novel financial incentive program. Acad Med. 2014;89(3):460-468.
11. Fok MC, Wong RY. Impact of a competency based curriculum on quality improvement among internal medicine residents. BMC Med Educ. 2014;14:252.
12. Wijesekera TP, Sanders L, Windish DM. Education and reporting of diagnostic errors among physicians in internal medicine training programs. JAMA Intern Med. 2018;178(11):1548-1549.
13. Levinson DR. Hospital incident reporting systems do not capture most patient harm. Washington, D.C.: U.S. Department of Health and Human Services Office of the Inspector General. January 2012. Report No. OEI-06-09-00091.
14. Evans JR, Mathur A. The value of online surveys. Internet Research. 2005;15(2):192-219.
15. Marcano Belisario JS, Jamsek J, Huckvale K, et al. Comparison of self-administered survey questionnaire responses collected using mobile apps versus other methods. Cochrane Database Syst Rev. 2015;(7):MR000042.
16. Manfreda KL, Batagelj Z, Vehovar V. Design of web survey questionnaires: three basic experiments. J Comput Mediat Commun. 2002;7(3):JCMC731.
17. Wright ND, Groisman‐Perelstein AE, Wylie‐Rosett J, et al. A lifestyle assessment and intervention tool for pediatric weight management: the HABITS questionnaire. J Hum Nutr Diet. 2011;24(1):96-100.
Implementing the AMI READMITS Risk Assessment Score to Increase Referrals Among Patients With Type I Myocardial Infarction
From The Johns Hopkins Hospital, Baltimore, MD (Dr. Muganlinskaya and Dr. Skojec, retired); The George Washington University, Washington, DC (Dr. Posey); and Johns Hopkins University, Baltimore, MD (Dr. Resar).
Abstract
Objective: Assessing the risk characteristics of patients with acute myocardial infarction (MI) can help providers make appropriate referral decisions. This quality improvement project sought to improve timely, appropriate referrals among patients with type I MI by adding a risk assessment, the AMI READMITS score, to the existing referral protocol.
Methods: Patients’ chart data were analyzed to assess changes in referrals and timely follow-up appointments from pre-intervention to intervention. A survey assessed providers’ satisfaction with the new referral protocol.
Results: Among 57 patients (n = 29 preintervention; n = 28 intervention), documented referrals increased significantly from 66% to 89% (χ2 = 4.571, df = 1, P = 0.033); and timely appointments increased by 10%, which was not significant (χ2 = 3.550, df = 2, P = 0.169). Most providers agreed that the new protocol was easy to use, useful in making referral decisions, and improved the referral process. All agreed the risk score should be incorporated into electronic clinical notes. Provider opinions related to implementing the risk score in clinical practice were mixed. Qualitative feedback suggests this was due to limited validation of the AMI READMITS score in reducing readmissions.
Conclusions: Our risk-based referral protocol helped to increase appropriate referrals among patients with type I MI. Provider adoption may be enhanced by incorporating the protocol into electronic clinical notes. Research to further validate the accuracy of the AMI READMITS score in predicting readmissions may support adoption of the protocol in clinical practice.
Keywords: quality improvement; type I myocardial infarction; referral process; readmission risk; risk assessment; chart review.
Early follow-up after discharge is an important strategy to reduce the risk of unplanned hospital readmissions among patients with various conditions.1-3 While patient confounding factors, such as chronic health problems, environment, socioeconomic status, and literacy, make it difficult to avoid all unplanned readmissions, early follow-up may help providers identify and appropriately manage some health-related issues, and as such is a pivotal element of a readmission prevention strategy.4 There is evidence that patients with non-ST elevation myocardial infarction (NSTEMI) who have an outpatient appointment with a physician within 7 days after discharge have a lower risk of 30-day readmission.5
Our hospital’s postmyocardial infarction clinic was created to prevent unplanned readmissions within 30 days after discharge among patients with type I myocardial infarction (MI). Since inception, the number of referrals has been much lower than expected. In 2018, the total number of patients discharged from the hospital with type I MI and any troponin I level above 0.40 ng/mL was 313. Most of these patients were discharged from the hospital’s cardiac units; however, only 91 referrals were made. To increase referrals, the cardiology nurse practitioners (NPs) developed a post-MI referral protocol (Figure 1). However, this protocol was not consistently used and referrals to the clinic remained low.
Evidence-based risk assessment tools have the potential to increase effective patient management. For example, cardiology providers at the hospital utilize various scores, such as CHA2DS2-VASc6 and the Society of Thoracic Surgery risk score,7 to plan patient management. Among the scores used to predict unplanned readmissions for MI patients, the most promising is the AMI READMITS score.8 Unlike other nonspecific prediction models, the AMI READMITS score was developed based on variables extracted from the electronic health records (EHRs) of patients who were hospitalized for MI and readmitted within 30 days after discharge. Recognizing the potential to increase referrals by integrating an MI-specific risk assessment, this quality improvement study modified the existing referral protocol to include the patients’ AMI READMITS score and recommendations for follow-up.
Currently, there are no clear recommendations on how soon after discharge patients with MI should undergo follow-up. Because research data vary, we selected follow-up within 7 days for patients in high-risk groups, based on the “See you in 7” initiative for patients with heart failure (HF) and MI9,10 and on evidence that patients with NSTEMI have a lower risk of 30-day readmission if they have follow-up within 7 days after discharge.5 We selected follow-up within 14 days for patients in low-risk groups, based on evidence that postdischarge follow-up within 14 days reduces the risk of 30-day readmission in patients with acute myocardial infarction (AMI) and/or acutely decompensated HF.11
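As a sketch, this follow-up rule can be expressed as a simple function. Note that the high-risk cutoff below is a hypothetical placeholder: the actual AMI READMITS risk strata are defined in the original publication,8 not in this article.

```python
# Follow-up window rule from the risk-based referral protocol.
# HIGH_RISK_CUTOFF is hypothetical; the real strata come from the
# published AMI READMITS score, not from this article.
HIGH_RISK_CUTOFF = 13

def recommended_followup_days(readmits_score: int) -> int:
    """Map an AMI READMITS score to the protocol's follow-up window.

    High-risk patients: follow-up within 7 days of discharge.
    Low-risk patients: follow-up within 14 days of discharge.
    """
    return 7 if readmits_score >= HIGH_RISK_CUTOFF else 14
```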
Methods
This project was designed to answer the following question: For adult patients with type I MI, does implementation of a readmission risk assessment referral protocol increase the percentage of referrals and appointments scheduled within a recommended time? Anticipated outcomes included: (1) increased referrals to a cardiologist or the post-MI clinic; (2) increased scheduled follow-up appointments within 7 to 14 days; (3) provider satisfaction with the usability and usefulness of the new protocol; and (4) consistent provider adoption of the new risk assessment referral protocol.
To evaluate the degree to which these outcomes were achieved, we reviewed patient charts for 2 months prior to and 2 months during implementation of the new referral protocol. As shown in Figure 2, the new protocol added the following process steps to the existing protocol: calculation of the AMI READMITS score, recommendations for follow-up based on patients’ risk score, and guidance to refer patients to the post-MI clinic if patients did not have an appointment with a cardiologist within 7 to 14 days after discharge. Patients’ risk assessment scores were obtained from forms completed by clinicians during the intervention. Clinicians’ perceptions of the usability and usefulness of the new protocol, and feedback related to its long-term adoption, were assessed using a descriptive survey.
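The decision flow these added steps describe can be sketched in code. This is an illustrative outline only: the risk-group labels follow the grouping used in this study, the function names are invented for the sketch, and no published AMI READMITS score-to-group cutoffs are implied.

```python
from typing import Optional

# Risk groups as grouped in this study: moderate-to-extremely-high-risk
# patients get 7-day follow-up; low-to-extremely-low-risk get 14-day.
HIGH_RISK_GROUPS = {"moderate", "high", "extremely high"}
LOW_RISK_GROUPS = {"extremely low", "low"}


def followup_window_days(risk_group: str) -> int:
    """Recommended days to follow-up for a patient's risk group."""
    if risk_group in HIGH_RISK_GROUPS:
        return 7
    if risk_group in LOW_RISK_GROUPS:
        return 14
    raise ValueError(f"unknown risk group: {risk_group!r}")


def referral_destination(risk_group: str,
                         days_to_cardiologist_appt: Optional[int]) -> str:
    """Refer to the post-MI clinic when no cardiologist appointment is
    available inside the recommended window after discharge."""
    window = followup_window_days(risk_group)
    if (days_to_cardiologist_appt is not None
            and days_to_cardiologist_appt <= window):
        return "cardiologist"
    return "post-MI clinic"
```

For example, a moderate-risk patient whose earliest cardiology appointment is 10 days out would be routed to the post-MI clinic, while a low-risk patient with the same appointment would keep it.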
The institutional review board classified this project as a quality improvement project. To avoid potential loss of patient privacy, no identifiable data were collected, a unique identifier unrelated to patients’ records was generated for each patient, and data were saved on a password-protected cardiology office computer.
Population
The project population included all adult patients (≥ 18 years old) with type I MI who were admitted or transferred to the hospital, had a percutaneous coronary intervention (PCI), or were managed without PCI and discharged from the hospital’s cardiac care unit (CCU) and progressive cardiac care unit (PCCU). The criteria for type I MI included the “detection of a rise and/or fall of cardiac troponin with at least 1 value above the 99th percentile and with at least 1 of the following: symptoms of acute myocardial ischemia; new ischemic electrocardiographic (ECG) changes; development of new pathological Q waves; imaging evidence of new loss of viable myocardium or new regional wall motion abnormality in a pattern consistent with an ischemic etiology; identification of a coronary thrombus by angiography including intracoronary imaging or by autopsy.”12 The study excluded patients with type I MI who were referred for coronary bypass surgery.
Intervention
The revised risk assessment protocol was implemented within the CCU and PCCU. The lead investigator met with each provider to discuss the role of the post-MI clinic, current referral rates, the purpose of the project, and the new referral process to be completed during the project for each patient discharged with type I MI. Cardiology NPs, fellows, and residents were asked to use the risk assessment form to calculate patients’ risk for readmission, and to refer patients to the post-MI clinic if an appointment with a cardiologist was not available within 7 to 14 days after discharge. Every week during the intervention phase, the investigator sent reminder emails to ensure form completion. On the risk assessment form, providers were asked to record the calculated score, the discharge and referral dates, the referral destination (a cardiologist or the post-MI clinic), the appointment date, and the reason for not scheduling an appointment or not referring, and then to drop the completed forms in labeled boxes at the CCU and PCCU workstations. The investigator collected the completed forms weekly. When the number of discharged patients did not match the number of completed forms, the investigator followed up with discharging providers to understand why.
Data and Data Collection
Data to determine whether the use of the new protocol increased discharge referrals among patients with type I MI within the recommended timeframes were collected by electronic chart review. Data included discharging unit, patients’ age, gender, admission and discharge date, diagnosis, referral to a cardiologist and the post-MI clinic, and appointment date. Clinical data needed to calculate the AMI READMITS score were also collected: PCI within 24 hours, serum creatinine, systolic blood pressure (SBP), brain natriuretic peptide (BNP), and diabetes status.
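Grouped as a record, the chart variables used for the score calculation look like the following sketch. The field names and units are our own illustration of the data elements listed above, not a schema from the study or its EHR, and no scoring weights are implied.

```python
from dataclasses import dataclass


@dataclass
class ReadmitsInputs:
    """Chart variables extracted to calculate the AMI READMITS score."""
    pci_within_24h: bool      # PCI performed within 24 hours
    serum_creatinine: float   # mg/dL (assumed unit)
    systolic_bp: int          # systolic blood pressure, mm Hg
    bnp: float                # brain natriuretic peptide, pg/mL (assumed unit)
    diabetes: bool            # diabetes status


# Hypothetical example record for a single discharged patient
example = ReadmitsInputs(pci_within_24h=True, serum_creatinine=1.1,
                         systolic_bp=128, bnp=250.0, diabetes=False)
```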
Data to assess provider satisfaction with the usability and usefulness of the new protocol were gathered through an online survey. The survey included 1 question about the provider’s role, 1 question asking whether they used the risk assessment for each patient, and 5 Likert-scale items assessing ease of use. An additional open-ended question asked providers to share feedback on integrating the AMI READMITS risk assessment score into the post-MI referral protocol long term.
To evaluate how consistently providers utilized the new referral protocol when discharging patients with type I MI, the number of completed forms was compared with the number of those patients who were discharged.
Statistical Analysis
Descriptive statistics were used to summarize patient demographics and to calculate the frequency of referrals before and during the intervention. Chi-square statistics were calculated to determine whether the change in percentage of referrals and timely referrals was significant. Descriptive statistics were used to determine the level of provider satisfaction related to each survey item. A content analysis method was used to synthesize themes from the open-ended question asking clinicians to share their feedback related to the new protocol.
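As a concreteness check, the chi-square for the change in referrals reported in the Results can be reproduced from the 2 × 2 counts (referred vs. no documented referral, preintervention vs. intervention). The few lines of Python below are a standalone illustration, not the software used by the study.

```python
import math


def chi_square_2x2(table):
    """Pearson chi-square without continuity correction for a 2x2
    table [[a, b], [c, d]]; returns (statistic, df, p_value)."""
    (a, b), (c, d) = table
    n = a + b + c + d
    rows = [a + b, c + d]
    cols = [a + c, b + d]
    stat = 0.0
    for i in range(2):
        for j in range(2):
            observed = table[i][j]
            expected = rows[i] * cols[j] / n
            stat += (observed - expected) ** 2 / expected
    # With df = 1, the chi-square survival function reduces to erfc.
    p_value = math.erfc(math.sqrt(stat / 2))
    return stat, 1, p_value


# Referred vs. no documented referral, preintervention vs. intervention
stat, df, p = chi_square_2x2([[19, 10], [25, 3]])
print(round(stat, 3), df, round(p, 3))  # 4.571 1 0.033
```

This matches the reported result (χ2 = 4.571, df = 1, P = 0.033); note that standard library routines applying the Yates continuity correction to 2 × 2 tables would return a different statistic.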
Results
Fifty-seven patients met the study inclusion criteria: 29 patients during the preintervention phase and 28 patients during the intervention phase. There were 35 male (61.4%) and 22 female (38.6%) patients. Twenty-five patients (43.9%) each were in the 41-through-60-year and the 61-through-80-year age groups, together representing the majority of included patients. Seven patients (12.3%) were in the 81-years-and-older age group. There were no patients in the age group 18 through 40 years. Based on the AMI READMITS score calculation, 57.9% (n = 33) of patients were from a low-risk group (extremely low and low risk for readmission) and 42.1% (n = 24) were from a high-risk group (moderate, high, and extremely high risk for readmission).
Provider adoption of the new protocol during the intervention was high. Referral forms were completed for 82% (n = 23) of the 28 patients during the intervention. Analysis findings showed a statistically significant increase in documented referrals after implementing the new referral protocol. During the preintervention phase, 66% (n = 19) of patients with type I MI were referred to see a cardiologist or an NP at a post-MI clinic and there was no documented referral for 34% (n = 10) of patients. During the intervention phase, 89% (n = 25) of patients were referred and there was no documented referral for 11% (n = 3) of patients. Chi-square results indicated that the increase in referrals was significant (χ2 = 4.571, df = 1, P = 0.033).
Data analysis examined whether patient referrals fell within the recommended timeframe of 7 days for the high-risk group (moderate to extremely high risk) and 14 days for the low-risk group (low to extremely low risk). During the preintervention phase, 31% (n = 9) of patient referrals were scheduled as recommended; 28% (n = 8) of patient referrals were scheduled but delayed; and there was no referral date documented for 41% (n = 12) of patients. During the intervention phase, referrals scheduled as recommended increased to 53% (n = 15); 25% (n = 7) of referrals were scheduled but delayed; and there was no referral date documented for 21% (n = 6) of patients. The change in appointments scheduled as recommended was not significant (χ2 = 3.550, df = 2, P = 0.169).
Surveys were emailed to 25 cardiology fellows and 3 cardiology NPs who participated in this study. Eighteen of the 28 clinicians (15 cardiology fellows and 3 cardiology NPs) responded, for a response rate of 64%. One of several residents who rotated through the CCU and PCCU during the intervention also completed the survey, for a total of 19 participants. When asked if the protocol was easy to use, 79% agreed or strongly agreed. Eighteen of the 19 participants (95%) agreed or strongly agreed that the protocol was useful in making referral decisions. Sixty-eight percent agreed or strongly agreed that the AMI READMITS risk assessment score improves the referral process. All participants agreed or strongly agreed that there should be an option to incorporate the AMI READMITS risk assessment score into electronic clinical notes. When asked whether the AMI READMITS risk score should be implemented in clinical practice, responses were mixed (Figure 3). A common theme among the 4 participants who responded with comments was the need for additional data to validate the usefulness of the AMI READMITS to reduce readmissions. In addition, 1 participant commented that “manual calculation [of the risk score] is not ideal.”
Discussion
This project demonstrated that implementing an evidence-based referral protocol integrating the AMI READMITS score can increase timely postdischarge referrals among patients with type I MI. The percentage of appropriately scheduled appointments increased during the intervention phase; however, a relatively high number of appointments were scheduled outside of the recommended timeframe, similar to preintervention. Thus, while the new protocol increased referrals and provider documentation of these referrals, it appears that challenges in scheduling timely referral appointments remained. This project did not examine the reasons for delayed appointments.
The survey findings indicated that providers were generally satisfied with the usability and usefulness of the new risk assessment protocol. A large majority agreed or strongly agreed that it was easy to use and useful in making referral decisions, and most agreed or strongly agreed that it improves the referral process. Mixed opinions regarding implementing the AMI READMITS score in clinical practice, combined with qualitative findings, suggest that a lack of external validation of the AMI READMITS presents a barrier to its long-term adoption. All providers who participated in the survey agreed or strongly agreed that the risk assessment should be incorporated into electronic clinical notes. We have begun the process of working with the EHR vendor to automate the AMI risk assessment within the referral workflow, which will provide an opportunity for a follow-up quality improvement study.
This quality improvement project has several limitations. First, it implemented a small change in 2 inpatient units at 1 hospital using a simple pre-/posttest design; therefore, the findings are not generalizable to other settings. Prior to the intervention, some referrals may have been made without documentation. While the authors were able to trace undocumented referrals for patients who were referred to the post-MI clinic or to a cardiologist affiliated with the hospital, some patients may have been referred to cardiologists who were not affiliated with the hospital. Another limitation was that the self-developed provider survey was not tested in other clinical settings; thus, the sensitivity and specificity of the survey questions could not be determined. In addition, the clinical providers who participated in the study knew the study team, which may have influenced their behavior during the study period. Furthermore, the identified improvement in clinicians’ referral practices may not be sustainable due to the complexity and effort required to manually calculate the risk score. This limitation could be eliminated by integrating the risk score calculation into the EHR.
Conclusion
Early follow-up after discharge plays an important role in supporting patients’ self-management of some risk factors (eg, diet, weight, and smoking) and identifying gaps in postdischarge care that may lead to readmission. This project provides evidence that integrating the AMI READMITS risk assessment score into the referral process can help to guide discharge decision-making and increase timely, appropriate referrals for patients with MI. Integration of a specific risk assessment, such as the AMI READMITS, within the post-MI referral protocol may help clinicians make more efficient, educated referral decisions. Future studies should explore more specifically how and why the new protocol impacts clinicians’ decision-making and behavior related to post-MI referrals. In addition, future studies should investigate challenges associated with scheduling postdischarge appointments. It will be important to investigate how integration of the new protocol within the EHR may increase efficiency, consistency, and provider satisfaction with the new referral process. Additional research investigating the effects of the AMI READMITS score on readmissions reduction will be important to promote long-term adoption of the improved referral protocol in clinical practice.
Acknowledgments: The authors thank Shelly Conaway, ANP-BC, MSN, Angela Street, ANP-BC, MSN, Andrew Geis, ACNP-BC, MSN, Richard P. Jones II, MD, Eunice Young, MD, Joy Rothwell, MSN, RN-BC, Allison Olazo, MBA, MSN, RN-BC, Elizabeth Heck, RN-BC, and Matthew Trojanowski, MHA, MS, RRT, CSSBB for their support of this study.
Corresponding author: Nailya Muganlinskaya, DNP, MPH, ACNP-BC, MSN, The Johns Hopkins Hospital, 1800 Orleans St, Baltimore, MD 21287; [email protected].
Financial disclosures: None.
1. Why it is important to improve care transitions? Society of Hospital Medicine. Accessed June 15, 2020. https://www.hospitalmedicine.org/clinical-topics/care-transitions/
2. Tong L, Arnold T, Yang J, et al. The association between outpatient follow-up visits and all-cause non-elective 30-day readmissions: a retrospective observational cohort study. PLoS One. 2018;13(7):e0200691.
3. Jackson C, Shahsahebi M, Wedlake T, DuBard CA. Timeliness of outpatient follow-up: an evidence-based approach for planning after hospital discharge. Ann Fam Med. 2015;13(2):115-22.
4. Health Research & Educational Trust. Preventable Readmissions Change Package. American Hospital Association. Updated December 2015. Accessed June 10, 2020. https://www.aha.org/sites/default/files/hiin/HRETHEN_ChangePackage_Readmissions.pd
5. Tung Y-C, Chang G-M, Chang H-Y, Yu T-H. Relationship between early physician follow-up and 30-day readmission after acute myocardial infarction and heart failure. PLoS One. 2017;12(1):e0170061.
6. Kaplan RM, Koehler J, Zieger PD, et al. Stroke risk as a function of atrial fibrillation duration and CHA2DS2-VASc score. Circulation. 2019;140(20):1639-46.
7. Balan P, Zhao Y, Johnson S, et al. The Society of Thoracic Surgery Risk Score as a predictor of 30-day mortality in transcatheter vs surgical aortic valve replacement: a single-center experience and its implications for the development of a TAVR risk-prediction model. J Invasive Cardiol. 2017;29(3):109-14.
8. Smith LN, Makam AN, Darden D, et al. Acute myocardial infarction readmission risk prediction models: a systematic review of model performance. Circ Cardiovasc Qual Outcomes. 2018;11(1):e003885.
9. Baker H, Oliver-McNeil S, Deng L, Hummel SL. See you in 7: regional hospital collaboration and outcomes in Medicare heart failure patients. JACC Heart Fail. 2015;3(10):765-73.
10. Batten A, Jaeger C, Griffen D, et al. See you in 7: improving acute myocardial infarction follow-up care. BMJ Open Qual. 2018;7(2):e000296.
11. Lee DW, Armistead L, Coleman H, et al. Abstract 15387: Post-discharge follow-up within 14 days reduces 30-day hospital readmission rates in patients with acute myocardial infarction and/or acutely decompensated heart failure. Circulation. 2018;134(1):A15387.
12. Thygesen K, Alpert JS, Jaffe AS, et al. Fourth universal definition of myocardial infarction. Circulation. 2018;138(20):e618-e651.
From The Johns Hopkins Hospital, Baltimore, MD (Dr. Muganlinskaya and Dr. Skojec, retired); The George Washington University, Washington, DC (Dr. Posey); and Johns Hopkins University, Baltimore, MD (Dr. Resar).
Abstract
Objective: Assessing the risk characteristics of patients with acute myocardial infarction (MI) can help providers make appropriate referral decisions. This quality improvement project sought to improve timely, appropriate referrals among patients with type I MI by adding a risk assessment, the AMI READMITS score, to the existing referral protocol.
Methods: Patients’ chart data were analyzed to assess changes in referrals and timely follow-up appointments from pre-intervention to intervention. A survey assessed providers’ satisfaction with the new referral protocol.
Results: Among 57 patients (n = 29 preintervention; n = 28 intervention), documented referrals increased significantly from 66% to 89% (χ2 = 4.571, df = 1, P = 0.033); and timely appointments increased by 10%, which was not significant (χ2 = 3.550, df = 2, P = 0.169). Most providers agreed that the new protocol was easy to use, useful in making referral decisions, and improved the referral process. All agreed the risk score should be incorporated into electronic clinical notes. Provider opinions related to implementing the risk score in clinical practice were mixed. Qualitative feedback suggests this was due to limited validation of the AMI READMITS score in reducing readmissions.
Conclusions: Our risk-based referral protocol helped to increase appropriate referrals among patients with type I MI. Provider adoption may be enhanced by incorporating the protocol into electronic clinical notes. Research to further validate the accuracy of the AMI READMITS score in predicting readmissions may support adoption of the protocol in clinical practice.
Keywords: quality improvement; type I myocardial infarction; referral process; readmission risk; risk assessment; chart review.
Early follow-up after discharge is an important strategy to reduce the risk of unplanned hospital readmissions among patients with various conditions.1-3 While patient confounding factors, such as chronic health problems, environment, socioeconomic status, and literacy, make it difficult to avoid all unplanned readmissions, early follow-up may help providers identify and appropriately manage some health-related issues, and as such is a pivotal element of a readmission prevention strategy.4 There is evidence that patients with non-ST elevation myocardial infarction (NSTEMI) who have an outpatient appointment with a physician within 7 days after discharge have a lower risk of 30-day readmission.5
Our hospital’s postmyocardial infarction clinic was created to prevent unplanned readmissions within 30 days after discharge among patients with type I myocardial infarction (MI). Since inception, the number of referrals has been much lower than expected. In 2018, the total number of patients discharged from the hospital with type I MI and any troponin I level above 0.40 ng/mL was 313. Most of these patients were discharged from the hospital’s cardiac units; however, only 91 referrals were made. To increase referrals, the cardiology nurse practitioners (NPs) developed a post-MI referral protocol (Figure 1). However, this protocol was not consistently used and referrals to the clinic remained low.
Objective: Assessing the risk characteristics of patients with acute myocardial infarction (MI) can help providers make appropriate referral decisions. This quality improvement project sought to improve timely, appropriate referrals among patients with type I MI by adding a risk assessment, the AMI READMITS score, to the existing referral protocol.
Methods: Patients’ chart data were analyzed to assess changes in referrals and timely follow-up appointments from pre-intervention to intervention. A survey assessed providers’ satisfaction with the new referral protocol.
Results: Among 57 patients (n = 29 preintervention; n = 28 intervention), documented referrals increased significantly from 66% to 89% (χ2 = 4.571, df = 1, P = 0.033), and appointments scheduled within the recommended timeframe increased from 31% to 53%, a change that was not significant (χ2 = 3.550, df = 2, P = 0.169). Most providers agreed that the new protocol was easy to use, useful in making referral decisions, and improved the referral process. All agreed the risk score should be incorporated into electronic clinical notes. Provider opinions related to implementing the risk score in clinical practice were mixed. Qualitative feedback suggests this was due to limited validation of the AMI READMITS score in reducing readmissions.
Conclusions: Our risk-based referral protocol helped to increase appropriate referrals among patients with type I MI. Provider adoption may be enhanced by incorporating the protocol into electronic clinical notes. Research to further validate the accuracy of the AMI READMITS score in predicting readmissions may support adoption of the protocol in clinical practice.
Keywords: quality improvement; type I myocardial infarction; referral process; readmission risk; risk assessment; chart review.
Early follow-up after discharge is an important strategy to reduce the risk of unplanned hospital readmissions among patients with various conditions.1-3 While patient confounding factors, such as chronic health problems, environment, socioeconomic status, and literacy, make it difficult to avoid all unplanned readmissions, early follow-up may help providers identify and appropriately manage some health-related issues, and as such is a pivotal element of a readmission prevention strategy.4 There is evidence that patients with non-ST elevation myocardial infarction (NSTEMI) who have an outpatient appointment with a physician within 7 days after discharge have a lower risk of 30-day readmission.5
Our hospital’s postmyocardial infarction clinic was created to prevent unplanned readmissions within 30 days after discharge among patients with type I myocardial infarction (MI). Since inception, the number of referrals has been much lower than expected. In 2018, the total number of patients discharged from the hospital with type I MI and any troponin I level above 0.40 ng/mL was 313. Most of these patients were discharged from the hospital’s cardiac units; however, only 91 referrals were made. To increase referrals, the cardiology nurse practitioners (NPs) developed a post-MI referral protocol (Figure 1). However, this protocol was not consistently used and referrals to the clinic remained low.
Evidence-based risk assessment tools have the potential to increase effective patient management. For example, cardiology providers at the hospital utilize various scores, such as CHA2DS2-VASc6 and the Society of Thoracic Surgery risk score,7 to plan patient management. Among the scores used to predict unplanned readmissions for MI patients, the most promising is the AMI READMITS score.8 Unlike other nonspecific prediction models, the AMI READMITS score was developed based on variables extracted from the electronic health records (EHRs) of patients who were hospitalized for MI and readmitted within 30 days after discharge. Recognizing the potential to increase referrals by integrating an MI-specific risk assessment, this quality improvement study modified the existing referral protocol to include the patients’ AMI READMITS score and recommendations for follow-up.
Currently, there are no clear recommendations on how soon after discharge patients with MI should undergo follow-up. Because the research data vary, we selected follow-up within 7 days for patients in high-risk groups, based on the “See you in 7” initiative for patients with heart failure (HF) and MI9,10 and on evidence that patients with NSTEMI have a lower risk of 30-day readmission when they have follow-up within 7 days after discharge.5 We selected follow-up within 14 days for patients in low-risk groups, based on evidence that postdischarge follow-up within 14 days reduces the risk of 30-day readmission in patients with acute myocardial infarction (AMI) and/or acutely decompensated HF.11
Methods
This project was designed to answer the following question: For adult patients with type I MI, does implementation of a readmission risk assessment referral protocol increase the percentage of referrals and appointments scheduled within a recommended time? Anticipated outcomes included: (1) increased referrals to a cardiologist or the post-MI clinic; (2) increased scheduled follow-up appointments within 7 to 14 days; (3) provider satisfaction with the usability and usefulness of the new protocol; and (4) consistent provider adoption of the new risk assessment referral protocol.
To evaluate the degree to which these outcomes were achieved, we reviewed patient charts for the 2 months prior to and the 2 months during implementation of the new referral protocol. As shown in Figure 2, the new protocol added the following process steps to the existing protocol: calculation of the AMI READMITS score, recommendations for follow-up based on the patient's risk score, and guidance to refer patients to the post-MI clinic if they did not have an appointment with a cardiologist within 7 to 14 days after discharge. Patients' risk assessment scores were obtained from forms completed by clinicians during the intervention. Clinicians' perceptions of the usability and usefulness of the new protocol, along with feedback related to its long-term adoption, were assessed using a descriptive survey.
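The decision rule added by the new protocol can be sketched in code. This is an illustrative sketch only: the 7- and 14-day windows and the risk groupings come from the text, but the function and argument names are ours, not taken from the study forms.

```python
from datetime import date, timedelta
from typing import Optional

def follow_up_window_days(risk_group: str) -> int:
    """High-risk patients (moderate to extremely high risk) need follow-up
    within 7 days; low-risk patients (extremely low to low risk) within 14 days."""
    return 7 if risk_group == "high" else 14

def referral_destination(risk_group: str, discharge: date,
                         cardiologist_appt: Optional[date]) -> str:
    """Refer to the post-MI clinic when no cardiology appointment falls
    within the recommended window after discharge."""
    window = timedelta(days=follow_up_window_days(risk_group))
    if cardiologist_appt is not None and cardiologist_appt - discharge <= window:
        return "cardiologist"
    return "post-MI clinic"

# A high-risk patient whose earliest cardiology slot is 9 days out falls
# outside the 7-day window, so the post-MI clinic is used instead.
print(referral_destination("high", date(2020, 6, 1), date(2020, 6, 10)))
# post-MI clinic
```

In practice this logic lived on a paper form during the intervention; the Discussion notes that automating it within the EHR referral workflow is planned.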
The institutional review board classified this project as a quality improvement project. To avoid potential loss of patient privacy, no identifiable data were collected, a unique identifier unrelated to patients’ records was generated for each patient, and data were saved on a password-protected cardiology office computer.
Population
The project population included all adult patients (≥ 18 years old) with type I MI who were admitted or transferred to the hospital, had a percutaneous coronary intervention (PCI), or were managed without PCI and discharged from the hospital’s cardiac care unit (CCU) and progressive cardiac care unit (PCCU). The criteria for type I MI included the “detection of a rise and/or fall of cardiac troponin with at least 1 value above the 99th percentile and with at least 1 of the following: symptoms of acute myocardial ischemia; new ischemic electrocardiographic (ECG) changes; development of new pathological Q waves; imaging evidence of new loss of viable myocardium or new regional wall motion abnormality in a pattern consistent with an ischemic etiology; identification of a coronary thrombus by angiography including intracoronary imaging or by autopsy.”12 The study excluded patients with type I MI who were referred for coronary bypass surgery.
Intervention
The revised risk assessment protocol was implemented within the CCU and PCCU. The lead investigator met with each provider to discuss the role of the post-MI clinic, current referral rates, the purpose of the project, and the new referral process to be completed during the project for each patient discharged with type I MI. Cardiology NPs, fellows, and residents were asked to use the risk-assessment form to calculate each patient's risk for readmission and to refer patients to the post-MI clinic if an appointment with a cardiologist was not available within 7 to 14 days after discharge. Every week during the intervention phase, the investigator sent reminder emails to ensure form completion. Providers were asked to record on the risk assessment form the calculated score, the discharge and referral dates, where the referral was made (a cardiologist or the post-MI clinic), the appointment date, and any reason for not referring or not scheduling an appointment, and to drop completed forms in labeled boxes at the CCU and PCCU work stations. The investigator collected the completed forms weekly. When the number of discharged patients did not match the number of completed forms, the investigator followed up with the discharging providers to understand why.
Data and Data Collection
Data to determine whether the use of the new protocol increased discharge referrals among patients with type I MI within the recommended timeframes were collected by electronic chart review. Data included discharging unit, patients’ age, gender, admission and discharge date, diagnosis, referral to a cardiologist and the post-MI clinic, and appointment date. Clinical data needed to calculate the AMI READMITS score was also collected: PCI within 24 hours, serum creatinine, systolic blood pressure (SBP), brain natriuretic peptide (BNP), and diabetes status.
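The chart variables abstracted for the score can be represented as a simple record. The sketch below is hypothetical: the field names are ours, and the published AMI READMITS point assignments are deliberately not reproduced; the helper only flags which variables remain to be abstracted before a score can be calculated.

```python
from dataclasses import dataclass, fields
from typing import Optional

@dataclass
class AmiReadmitsInputs:
    """Chart variables the study abstracted to calculate the AMI READMITS score."""
    pci_within_24h: Optional[bool] = None     # timely percutaneous coronary intervention
    creatinine_mg_dl: Optional[float] = None  # serum creatinine
    sbp_mm_hg: Optional[int] = None           # systolic blood pressure
    bnp_pg_ml: Optional[float] = None         # brain natriuretic peptide
    diabetes: Optional[bool] = None           # diabetes status

def missing_variables(inputs: AmiReadmitsInputs) -> list:
    """List chart variables still needed before the risk score can be calculated."""
    return [f.name for f in fields(inputs) if getattr(inputs, f.name) is None]

record = AmiReadmitsInputs(pci_within_24h=True, creatinine_mg_dl=1.1)
print(missing_variables(record))  # ['sbp_mm_hg', 'bnp_pg_ml', 'diabetes']
```

A completeness check of this kind mirrors the investigator's weekly reconciliation of completed forms against discharges.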
Data to assess provider satisfaction with the usability and usefulness of the new protocol were gathered through an online survey. The survey included 1 question about the provider's role, 1 question asking whether the provider used the risk assessment for each patient, and 5 Likert-scale items assessing ease of use. An additional open-ended question asked providers to share feedback related to integrating the AMI READMITS risk assessment score into the post-MI referral protocol long term.
To evaluate how consistently providers utilized the new referral protocol when discharging patients with type I MI, the number of completed forms was compared with the number of those patients who were discharged.
Statistical Analysis
Descriptive statistics were used to summarize patient demographics and to calculate the frequency of referrals before and during the intervention. Chi-square statistics were calculated to determine whether the change in percentage of referrals and timely referrals was significant. Descriptive statistics were used to determine the level of provider satisfaction related to each survey item. A content analysis method was used to synthesize themes from the open-ended question asking clinicians to share their feedback related to the new protocol.
Results
Fifty-seven patients met the study inclusion criteria: 29 during the preintervention phase and 28 during the intervention phase. There were 35 male (61.4%) and 22 female (38.6%) patients. Twenty-five patients (43.9%) were in the 41-to-60-year age group and 25 (43.9%) in the 61-to-80-year group, together accounting for the majority of included patients. Seven patients (12.3%) were 81 years or older, and no patients were aged 18 through 40 years. Based on the AMI READMITS score calculation, 57.9% (n = 33) of patients were in a low-risk group (extremely low or low risk for readmission) and 42.1% (n = 24) were in a high-risk group (moderate, high, or extremely high risk for readmission).
Provider adoption of the new protocol during the intervention was high. Referral forms were completed for 82% (n = 23) of the 28 patients during the intervention. Analysis findings showed a statistically significant increase in documented referrals after implementing the new referral protocol. During the preintervention phase, 66% (n = 19) of patients with type I MI were referred to see a cardiologist or an NP at a post-MI clinic and there was no documented referral for 34% (n = 10) of patients. During the intervention phase, 89% (n = 25) of patients were referred and there was no documented referral for 11% (n = 3) of patients. Chi-square results indicated that the increase in referrals was significant (χ2 = 4.571, df = 1, P = 0.033).
Data analysis examined whether patient referrals fell within the recommended timeframe of 7 days for the high-risk group (moderate to extremely high risk) and 14 days for the low-risk group (extremely low to low risk). During the preintervention phase, 31% (n = 9) of patient referrals were scheduled as recommended, 28% (n = 8) were scheduled but delayed, and no referral date was documented for 41% (n = 12) of patients. During the intervention phase, referrals scheduled as recommended increased to 53% (n = 15), 25% (n = 7) were scheduled but delayed, and no referral date was documented for 21% (n = 6) of patients. The change in appointments scheduled as recommended was not significant (χ2 = 3.550, df = 2, P = 0.169).
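As a check, both chi-square statistics reported above can be reproduced from the published counts using only the standard library. The sketch below computes the Pearson statistic for an r × c table and uses the closed-form chi-square survival functions for 1 and 2 degrees of freedom.

```python
from math import erfc, exp, sqrt

def chi_square(table):
    """Pearson chi-square statistic for an r x c contingency table
    (list of rows of observed counts), without continuity correction."""
    row_totals = [sum(r) for r in table]
    col_totals = [sum(c) for c in zip(*table)]
    n = sum(row_totals)
    stat = 0.0
    for i, row in enumerate(table):
        for j, observed in enumerate(row):
            expected = row_totals[i] * col_totals[j] / n
            stat += (observed - expected) ** 2 / expected
    return stat

# Referrals: 19 referred / 10 not documented (preintervention)
# vs 25 / 3 (intervention)
stat1 = chi_square([[19, 10], [25, 3]])
p1 = erfc(sqrt(stat1 / 2))  # chi-square survival function for df = 1
print(round(stat1, 3), round(p1, 3))  # 4.571 0.033

# Timeliness: scheduled as recommended / delayed / no date, pre vs intervention
stat2 = chi_square([[9, 8, 12], [15, 7, 6]])
p2 = exp(-stat2 / 2)        # chi-square survival function for df = 2
print(round(stat2, 3), round(p2, 3))  # 3.55 0.169
```

Both results match the reported values (χ2 = 4.571, P = 0.033 and χ2 = 3.550, P = 0.169), confirming the tests were run without Yates continuity correction.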
Surveys were emailed to 25 cardiology fellows and 3 cardiology NPs who participated in this study. Eighteen of the 28 clinicians (15 cardiology fellows and 3 cardiology NPs) responded, for a response rate of 64%. One of several residents who rotated through the CCU and PCCU during the intervention also completed the survey, for a total of 19 participants. When asked if the protocol was easy to use, 79% agreed or strongly agreed. Eighteen of the 19 participants (95%) agreed or strongly agreed that the protocol was useful in making referral decisions. Sixty-eight percent agreed or strongly agreed that the AMI READMITS risk assessment score improves the referral process. All participants agreed or strongly agreed that there should be an option to incorporate the AMI READMITS risk assessment score into electronic clinical notes. When asked whether the AMI READMITS risk score should be implemented in clinical practice, responses were mixed (Figure 3). A common theme among the 4 participants who responded with comments was the need for additional data validating the usefulness of the AMI READMITS score in reducing readmissions. In addition, 1 participant commented that “manual calculation [of the risk score] is not ideal.”
Discussion
This project demonstrated that implementing an evidence-based referral protocol integrating the AMI READMITS score can increase timely postdischarge referrals among patients with type I MI. The percentage of appropriately scheduled appointments increased during the intervention phase; however, a relatively high number of appointments were scheduled outside of the recommended timeframe, similar to preintervention. Thus, while the new protocol increased referrals and provider documentation of these referrals, it appears that challenges in scheduling timely referral appointments remained. This project did not examine the reasons for delayed appointments.
The survey findings indicated that providers were generally satisfied with the usability and usefulness of the new risk assessment protocol. A large majority agreed or strongly agreed that it was easy to use and useful in making referral decisions, and most agreed or strongly agreed that it improves the referral process. Mixed opinions regarding implementing the AMI READMITS score in clinical practice, combined with the qualitative findings, suggest that the lack of external validation of the AMI READMITS score presents a barrier to its long-term adoption. All providers who participated in the survey agreed or strongly agreed that the risk assessment should be incorporated into electronic clinical notes. We have begun working with the EHR vendor to automate the AMI READMITS risk assessment within the referral workflow, which will provide an opportunity for a follow-up quality improvement study.
This quality improvement project has several limitations. First, it implemented a small change in 2 inpatient units at 1 hospital using a simple pre-/posttest design; therefore, the findings are not generalizable to other settings. Second, prior to the intervention, some referrals may have been made without documentation. While the authors were able to trace undocumented referrals for patients who were referred to the post-MI clinic or to a cardiologist affiliated with the hospital, some patients may have been referred to cardiologists who were not affiliated with the hospital. Another limitation is that the investigator-developed provider survey was not tested in other clinical settings, so the sensitivity and specificity of the survey questions could not be determined. In addition, the clinical providers who participated in the study knew the study team, which may have influenced their behavior during the study period. Finally, the identified improvement in clinicians' referral practices may not be sustainable given the complexity and effort required to manually calculate the risk score; this limitation could be eliminated by integrating the risk score calculation into the EHR.
Conclusion
Early follow-up after discharge plays an important role in supporting patients' self-management of some risk factors (ie, diet, weight, and smoking) and identifying gaps in postdischarge care that may lead to readmission. This project provides evidence that integrating the AMI READMITS risk assessment score into the referral process can help to guide discharge decision-making and increase timely, appropriate referrals for patients with MI. Integration of a specific risk assessment, such as the AMI READMITS, within the post-MI referral protocol may help clinicians make more efficient, educated referral decisions. Future studies should explore more specifically how and why the new protocol impacts clinicians' decision-making and behavior related to post-MI referrals. In addition, future studies should investigate challenges associated with scheduling postdischarge appointments. It will be important to investigate how integration of the new protocol within the EHR may increase efficiency, consistency, and provider satisfaction with the new referral process. Additional research investigating the effects of the AMI READMITS score on readmissions reduction will be important to promote long-term adoption of the improved referral protocol in clinical practice.
Acknowledgments: The authors thank Shelly Conaway, ANP-BC, MSN, Angela Street, ANP-BC, MSN, Andrew Geis, ACNP-BC, MSN, Richard P. Jones II, MD, Eunice Young, MD, Joy Rothwell, MSN, RN-BC, Allison Olazo, MBA, MSN, RN-BC, Elizabeth Heck, RN-BC, and Matthew Trojanowski, MHA, MS, RRT, CSSBB for their support of this study.
Corresponding author: Nailya Muganlinskaya, DNP, MPH, ACNP-BC, MSN, The Johns Hopkins Hospital, 1800 Orleans St, Baltimore, MD 21287; [email protected].
Financial disclosures: None.
1. Why it is important to improve care transitions? Society of Hospital Medicine. Accessed June 15, 2020. https://www.hospitalmedicine.org/clinical-topics/care-transitions/
2. Tong L, Arnold T, Yang J, et al. The association between outpatient follow-up visits and all-cause non-elective 30-day readmissions: a retrospective observational cohort study. PLoS One. 2018;13(7):e0200691.
3. Jackson C, Shahsahebi M, Wedlake T, DuBard CA. Timeliness of outpatient follow-up: an evidence-based approach for planning after hospital discharge. Ann Fam Med. 2015;13(2):115-22.
4. Health Research & Educational Trust. Preventable Readmissions Change Package. American Hospital Association. Updated December 2015. Accessed June 10, 2020. https://www.aha.org/sites/default/files/hiin/HRETHEN_ChangePackage_Readmissions.pd
5. Tung Y-C, Chang G-M, Chang H-Y, Yu T-H. Relationship between early physician follow-up and 30-day readmission after acute myocardial infarction and heart failure. PLoS One. 2017;12(1):e0170061.
6. Kaplan RM, Koehler J, Zieger PD, et al. Stroke risk as a function of atrial fibrillation duration and CHA2DS2-VASc score. Circulation. 2019;140(20):1639-46.
7. Balan P, Zhao Y, Johnson S, et al. The Society of Thoracic Surgery Risk Score as a predictor of 30-day mortality in transcatheter vs surgical aortic valve replacement: a single-center experience and its implications for the development of a TAVR risk-prediction model. J Invasive Cardiol. 2017;29(3):109-14.
8. Smith LN, Makam AN, Darden D, et al. Acute myocardial infarction readmission risk prediction models: a systematic review of model performance. Circ Cardiovasc Qual Outcomes. 2018;11(1):e003885.
9. Baker H, Oliver-McNeil S, Deng L, Hummel SL. See you in 7: regional hospital collaboration and outcomes in Medicare heart failure patients. JACC Heart Fail. 2015;3(10):765-73.
10. Batten A, Jaeger C, Griffen D, et al. See you in 7: improving acute myocardial infarction follow-up care. BMJ Open Qual. 2018;7(2):e000296.
11. Lee DW, Armistead L, Coleman H, et al. Abstract 15387: Post-discharge follow-up within 14 days reduces 30-day hospital readmission rates in patients with acute myocardial infarction and/or acutely decompensated heart failure. Circulation. 2018;134(1):A15387.
12. Thygesen K, Alpert JS, Jaffe AS, et al. Fourth universal definition of myocardial infarction. Circulation. 2018;138(20):e618-e651.
Senate confirms Murthy as Surgeon General
Seven Republicans – Bill Cassidy (La.), Susan Collins (Maine), Roger Marshall (Kan.), Lisa Murkowski (Alaska), Rob Portman (Ohio), Mitt Romney (Utah), and Dan Sullivan (Alaska) – joined all the Democrats and independents in the 57-43 vote approving Dr. Murthy’s nomination.
Dr. Murthy, 43, previously served as the 19th Surgeon General, from December 2014 to April 2017, when he was asked to step down by President Donald J. Trump.
Surgeons General serve 4-year terms.
During his first tenure, Dr. Murthy issued the first-ever Surgeon General’s report on the crisis of addiction and issued a call to action to doctors to help battle the opioid crisis.
When Dr. Murthy was nominated by President-elect Joseph R. Biden Jr. in December, he was acting as cochair of the incoming administration’s COVID-19 transition advisory board.
Early in 2020, before the COVID-19 pandemic hit, Dr. Murthy published a timely book: “Together: The Healing Power of Human Connection in a Sometimes Lonely World”.
He earned his bachelor’s degree from Harvard and his MD and MBA degrees from Yale. He completed his internal medicine residency at Brigham and Women’s Hospital in Boston, where he also served as a hospitalist, and later joined Harvard Medical School as a faculty member in internal medicine.
He is married to Alice Chen, MD. The couple have two children.
A version of this article first appeared on WebMD.com.
Change is hard: Lessons from an EHR conversion
During this “go-live,” 5 hospitals and approximately 300 ambulatory service and physician practice locations made the transition, consolidating over 100 disparate electronic systems and dozens of interfaces into one world-class medical record.
If you’ve ever been part of such an event, you know it is anything but simple. On the contrary, it requires an enormous financial investment along with years of planning, hours of meetings, and months of training. No matter how much preparation goes into it, there are sure to be bumps along the way. It is a traumatic and stressful time for all involved, but the end result is well worth the effort. Still, there are lessons to be learned and wisdom to be gleaned, and this month we’d like to share a few that we found most important. We believe that many of these are useful lessons even to those who will never live through a go-live.
Safety always comes first
Patient safety is a term so often used that it has a tendency to be taken for granted. Health systems build processes and procedures to ensure safety – some even win awards and recognition for their efforts. But the best (and safest) health care institutions build patient safety into their cultures. More than just being taught to use checklists or buzzwords, the staff at these institutions are encouraged to put the welfare of patients first, making all other activities secondary to this pursuit. We had the opportunity to witness the benefits of such a culture during this go-live and were incredibly impressed with the results.
To be successful in an EHR transition of any magnitude, an organization needs to hold patient safety as a core value and provide its employees with the tools to execute on that value. This enables staff to prepare adequately and to identify risks and opportunities before the conversion takes place. Once go-live occurs, staff also must feel empowered to speak up when they identify problem areas that might jeopardize patients’ care. They also must be given a clear escalation path to ensure their voices can be heard. Most importantly, everyone must understand that the electronic health record itself is just one piece of a major operational change.
As workflows are modified to adapt to the new technology, unsafe processes should be called out and fixed quickly. While the EHR may offer the latest in decision support and system integration, no advancement in technology can make up for bad outcomes, nor justify processes that lead to patient harm.
Training is no substitute for good support
It takes a long time to train thousands of employees, especially when that training must occur during the era of social distancing in the midst of a pandemic. Still, even in the best of times, education should be married to hands-on experience in order to have a real impact. Unfortunately, this is extremely challenging.
Trainees forget much of what they’ve learned in the weeks or months between education and go-live, so they must be given immediately accessible support to bridge the gap. This is known as “at-the-elbow” (ATE) support, and as the name implies, it consists of individuals who are familiar with the new system and are always available to end users, answering their questions and helping them navigate. Since health care never sleeps, this support needs to be offered 24/7, and it should also be flexible and plentiful.
There are many areas that will require more support than anticipated to accommodate the number of clinical and other staff who will use the system, so support staff must be nimble and available for redeployment. In addition, ensuring high-quality support is essential. As many ATE experts are hired contractors, their knowledge base and communication skills can vary widely. Accountability is key, and end users should feel empowered to flag gaps in coverage and deficits in knowledge among ATE staff.
As employees become more familiar with the new system, the need for ATE will wane, but there will still be questions that arise for many weeks to months, and new EHR users will also be added all the time. A good after–go-live support system should remain available so clinical and clerical employees can get just-in-time assistance whenever they need it.
Users should be given clear expectations
Clinicians going through an EHR conversion may be frustrated to discover that the data transferred from their old system into the new one is not quite what they expected. While structured elements such as allergies and immunizations may transfer, unstructured patient histories may not come over at all.
There may be gaps in data, or the opposite may even be true: an overabundance of useless information may transfer over, leaving doctors with dozens of meaningless data points to sift through and eliminate to clean up the chart. This can be extremely time-consuming and discouraging and may jeopardize the success of the go-live.
Providers deserve clear expectations prior to conversion. They should be told what will and will not transfer and be informed that there will be extra work required for documentation at the outset. They may also want the option to preemptively reduce patient volumes to accommodate the additional effort involved in preparing charts. No matter what, this will be a heavy lift, and physicians should understand the implications long before go-live to prepare accordingly.
Old habits die hard
One of the most common complaints we’ve heard following EHR conversions is that “things just worked better in the old system.” We always respond with a question: “Were things better, or just different?” The truth may lie somewhere in the middle, but there is no question that muscle memory develops over many years, and change is difficult no matter how much better the new system is. Still, appropriate expectations, access to just-in-time support, and a continual focus on safety will ensure that the long-term benefits of a patient-centered and integrated electronic record will far outweigh the initial challenges of go-live.
During this “go-live,” 5 hospitals and approximately 300 ambulatory service and physician practice locations made the transition, consolidating over 100 disparate electronic systems and dozens of interfaces into one world-class medical record.
Dr. Notte is a family physician and chief medical officer of Abington (Pa.) Hospital–Jefferson Health. Dr. Skolnik is professor of family and community medicine at Sidney Kimmel Medical College, Philadelphia, and associate director of the family medicine residency program at Abington Hospital–Jefferson Health. They have no conflicts related to the content of this piece.
2021 match sets records: Who matched and who didn’t?
A total of 38,106 positions were offered, up 850 spots (2.3%) from 2020. Of those, 35,194 were first-year (PGY-1) positions, which was 928 more than the previous year (2.7%). A record 5,915 programs were part of the Match, 88 more than 2020.
“The application and recruitment cycle was upended as a result of the pandemic, yet the results of the Match continue to demonstrate strong and consistent outcomes for participants,” Donna L. Lamb, DHSc, MBA, BSN, NRMP president and CEO, said in a news release.
The report comes amid a year of Zoom interview fatigue, canceled testing, and virus fears and work-arounds, challenges unlike any the NRMP had faced since it was established in 1952.
Despite challenges, fill rates increased across the board. Of the 38,106 total positions offered, 36,179 were filled, representing a 2.6% increase over 2020. Of the 35,194 first-year positions available, 33,535 were filled, representing a 2.9% increase.
Those rates drove the percentage of all positions filled to 94.9% (up from 94.6%) and the percentage of PGY-1 positions filled to 94.8% (also up from 94.6%). There were 1,927 unfilled positions, a decline of 71 (3.6%) from 2020.
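The overall fill-rate arithmetic reported above is internally consistent and easy to verify; a quick check in Python:

```python
# Sanity-check the 2021 Match totals reported above.
total_offered, total_filled = 38_106, 36_179

unfilled = total_offered - total_filled
print(unfilled)  # → 1927 unfilled positions

overall_fill_rate = round(total_filled / total_offered * 100, 1)
print(overall_fill_rate)  # → 94.9 percent of all positions filled

# Growth in positions offered versus 2020 (850 more spots)
prev_offered = total_offered - 850
print(round(850 / prev_offered * 100, 1))  # → 2.3 percent year-over-year
```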
Primary care results strong
Of the first-year positions offered, 17,649 (49.6%) were in family medicine, internal medicine, and pediatrics. That’s an increase of 514 positions (3%) over 2020.
Of the primary care first-year positions offered in 2021, 16,860 (95.5%) were filled. U.S. seniors took 11,013 (65.3%) of those slots, a slight decline (0.3%) from 2020. Family medicine saw a gain of 63 U.S. MD seniors who matched, and internal medicine saw a gain of 93 U.S. DO seniors who matched.
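The primary care percentages likewise check out, assuming the 16,860 filled positions refer to the primary care subset of the 17,649 offered:

```python
# Verify the primary care figures, treating 16,860 as the filled
# subset of the 17,649 primary care PGY-1 positions offered.
pc_offered, pc_filled = 17_649, 16_860
print(round(pc_filled / pc_offered * 100, 1))  # → 95.5 (fill rate)

us_seniors = 11_013
print(round(us_seniors / pc_filled * 100, 1))  # → 65.3 (share taken by U.S. seniors)
```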
Some specialties filled all positions
PGY-1 specialties with 30 positions or more that filled all available positions include dermatology, medicine – emergency medicine, medicine – pediatrics, neurologic surgery, otolaryngology, integrated plastic surgery, and vascular surgery.*
PGY-1 specialties with 30 positions or more that filled more than 90% with U.S. seniors include dermatology (100%), medicine – emergency medicine (93.6%), medicine – pediatrics (93.5%), otolaryngology (93.2%), orthopedic surgery (92.8%), and integrated plastic surgery (90.4%).*
PGY-1 specialties with at least 30 positions that filled less than 50% with U.S. seniors include pathology (41.4%) and surgery–preliminary (28%).
The number of U.S. citizen international medical graduates who submitted rank-ordered lists was 5,295, an increase of 128 (2.5%) over 2020 and the highest in 6 years; 3,152 of them matched to first-year positions, two fewer PGY-1 matches than last year.
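The 2.5% growth in U.S. citizen international medical graduate applicants also follows from the counts given:

```python
# U.S. citizen IMGs who submitted rank-order lists, 2021 vs. 2020.
img_2021 = 5_295
img_2020 = img_2021 - 128
print(round(128 / img_2020 * 100, 1))  # → 2.5 percent increase
```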
Full data are available on the NRMP’s website.
Correction, 3/22/21: An earlier version of this article misstated the affected specialties.
A version of this article first appeared on Medscape.com.
Is pediatric subspecialty training financially worth it?
Pursuing fellowship training is often financially costly in terms of lifetime earnings, compared with starting a career as a general pediatrician immediately after residency, a report suggests.
Researchers found that most pediatric subspecialists – including those practicing neurology, pulmonology, and adolescent medicine – do not see a financial return from additional training because of the delays in receiving increased compensation and the repayment of educational debt.
“Most pediatric subspecialists don’t experience a relative increase in compensation after training compared to a general pediatrician, so there isn’t a financial benefit to additional training,” lead author Eva Catenaccio, MD, from the division of pediatric neurology, department of neurology, Johns Hopkins University, Baltimore, told this news organization.
The findings, published online March 8 in Pediatrics, contribute to the ongoing debate about the length of pediatric fellowship training programs. The data also provide evidence for the potential effect of a pediatric subspecialty loan repayment program.
Pediatric subspecialty training rarely pays off
Enrolling in a pediatric fellowship program resulted in lifetime financial returns that ranged from an increase of $852,129 for cardiology, relative to general pediatrics, to a loss of $1,594,366 for adolescent medicine, researchers found.
However, not all practitioners in pediatric subspecialties would find themselves in the red relative to their generalist peers. Three subspecialties had a positive financial return: cardiology, critical care, and neonatology. Dr. Catenaccio explained that this may be because these subspecialties tend to be “inpatient procedure oriented, which are often more [lucrative] than outpatient cognitive–oriented subspecialties, such as pediatric infectious diseases, endocrinology, or adolescent medicine.”
For the study, researchers calculated the financial returns of 15 pediatric subspecialties – emergency medicine, neurology, cardiology, critical care, neonatology, hematology and oncology, pulmonology, hospitalist medicine, allergy and immunology, gastroenterology, rheumatology, nephrology, adolescent medicine, infectious diseases, and endocrinology – in comparison with returns of private practice general pediatrics on the basis of 2018-2019 data on fellowship stipends, compensation, and educational debt.
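The comparison behind these numbers is essentially a discounted lifetime-earnings calculation: income that arrives later (after a low-paid fellowship) is worth less in present-value terms. A minimal sketch of that idea follows; every salary, stipend, career length, and discount rate below is a hypothetical placeholder, not an input from the study.

```python
# Toy net-present-value comparison of two pediatric career paths.
# All dollar figures and rates here are hypothetical placeholders,
# not the study's actual model inputs.

def career_npv(incomes, rate=0.03):
    """Discount a sequence of annual incomes back to year 0."""
    return sum(pay / (1 + rate) ** year for year, pay in enumerate(incomes))

YEARS = 30  # working years after residency (hypothetical)

# Path A: general pediatrics immediately after residency.
generalist = [200_000] * YEARS

# Path B: 3 years of fellowship at a stipend, then a higher subspecialty salary.
subspecialist = [70_000] * 3 + [220_000] * (YEARS - 3)

gap = career_npv(generalist) - career_npv(subspecialist)
print(f"Generalist NPV advantage: ${gap:,.0f}")
```

Even with a modestly higher subspecialty salary, the three low-earning fellowship years plus discounting can leave the generalist path ahead, which is the mechanism the study describes for most subspecialties.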
They obtained most of the data from the Association of American Medical Colleges Survey of Resident/Fellow Stipends and Benefits, AAMC’s annual Medical School Faculty Salary Report, and the AAMC Medical School Graduation Questionnaire.
Richard Mink, MD, department of pediatrics, Harbor-UCLA Medical Center, Torrance, Calif., noted that it would have been helpful to have also compared the lifetime earnings of practitioners in pediatric subspecialties to academic general pediatricians and not just those in private practice.
The financial gap has worsened
To better understand which aspects of fellowship training have the greatest effect on lifetime compensation, Dr. Catenaccio and colleagues evaluated the potential effects of shortening fellowship length, eliminating school debt, and implementing a federal loan repayment plan. These changes enhanced the returns of cardiology, critical care, and neonatology – subspecialties that had already seen financial returns before these changes – and resulted in a positive financial return for emergency medicine.
The changes also narrowed the financial gap between subspecialties and general pediatrics. However, the remaining subspecialties still earned less than private practice pediatrics.
The new study is an update to a 2011 report, which reflected 2007-2008 data for 11 subspecialties. This time around, the researchers included the subspecialty of hospitalist medicine, which was approved as a board-certified subspecialty by the American Board of Pediatrics in 2014, as well as neurology, allergy and immunology, and adolescent medicine.
“I was most surprised that the additional pediatric subspecialties we included since the 2011 report followed the same general trend, with pediatric subspecialty training having a lower lifetime earning potential than general pediatrics,” Dr. Catenaccio said.
Comparing results from the two study periods showed that the financial gap between general pediatrics and subspecialty pediatrics worsened over time. For example, the financial return for pediatric endocrinology fell by an additional $500,000 between the 2007-2008 and 2018-2019 study periods.
The researchers believe a combination of increased educational debt burden, slow growth in compensation, and changing interest rates over time have caused the financial differences between general pediatrics and subspecialty pediatrics to become more pronounced.
‘Pediatric subspecialty training is worth it!’
Despite the financial gaps, Dr. Catenaccio and colleagues say pediatric subspecialty training is still worthwhile but that policymakers should address these financial differences to help guide workforce distribution in a way that meets the needs of patients.
“I think pediatric subspecialty training is worth it,” said Dr. Catenaccio, who’s pursuing pediatric subspecialty training. “There are so many factors that go into choosing a specialty or subspecialty in medicine, including the desire to care for a particular patient population, interest in certain diseases or organ systems, lifestyle considerations, and research opportunities.”
But it’s also important for trainees to be aware of economic considerations in their decision-making.
Dr. Mink, who wrote an accompanying commentary, agrees that young clinicians should not make career decisions on the basis of metrics such as lifetime earning measures.
“I think people who go into pediatrics have decided that money is not the driving force,” said Dr. Mink. He noted that pediatricians are usually not paid well, compared with other specialists. “To me the important thing is you have to like what you’re doing.”
A 2020 study found that trainees who chose a career in pediatric pulmonology, a subspecialty, said that financial considerations were not the driving factor in their decision-making. Nevertheless, Dr. Mink also believes young clinicians should take into account their educational debt.
The further widening of the financial gap between general pediatrics and pediatric subspecialties could lead to shortages in the pediatric subspecialty workforce.
The authors and Dr. Mink have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
A return of holism? It never left osteopathic medicine
I enjoyed Dr. Jonas’s article, “A new model of care to return holism to family medicine” (J Fam Pract. 2020;69:493-498).
However, I wanted to point out that the concept of the patient-centered medical home, and its outgrowths, has been part of osteopathic medical education for more than 100 years; osteopathic medicine was founded by A.T. Still, MD, in the 1800s.
Congratulations to the allopathic medicine profession for recognizing its significance.
Steven Shapiro, DO
Fenton, MI
Let’s apply the lessons from the AIDS crisis to the COVID-19 pandemic
In 2020, COVID-19 disrupted our medical system, and life in general. In the 1980s, the AIDS epidemic devastated communities and overwhelmed hospitals. There were lessons learned from the AIDS epidemic that can be applied to the current situation.
Patients with HIV-spectrum illness faced stigmatization and societal indifference, including rejection by family members, increased rates of suicide, fears of sexual and/or intrauterine transmission, substance abuse issues, and alterations of body image for those with wasting syndromes and disfiguring Kaposi lesions. AIDS prevention strategies such as the provision of condoms and needle exchange programs were controversial, and many caregivers exposed to contaminated fluids had to endure months of antiretroviral treatment.
Similar to the AIDS epidemic, the COVID-19 pandemic has had significant psychological implications for patients and caregivers. Patients with COVID-19 infections also face feelings of guilt over potentially exposing a family member to the virus; devastating socioeconomic issues; restrictive hospital visitation policies for family members; disease news oversaturation; and feelings of hopelessness. People with AIDS in the 1980s faced the possibility of dying alone, and there was initial skepticism about medications to treat HIV—just as some individuals are now uneasy about recently introduced coronavirus vaccines.
The similarities between the two diseases give us some foresight on how to deal with current COVID-19 issues. Looking back on the AIDS epidemic should teach us to prioritize attending to the mental health of sufferers and caregivers, creating advocacy and support groups for when a patient’s family is unavailable, instilling public confidence in treatment options, maintaining staff morale, addressing substance abuse (due to COVID-related stress), and depoliticizing prevention strategies. Addressing these issues is especially critical for minority populations.
As respected medical care leaders, we can provide and draw extra attention to the needs of patients’ family members and health care personnel during this COVID-19 pandemic. Hopefully, the distribution of vaccines will shorten some of our communal and professional distress.
Robert Frierson, MD
Steven Lippmann, MD
Louisville, KY
mCODE: Improving data sharing to enhance cancer care
An initiative designed to improve sharing of patient data may provide “tremendous benefits” in cancer care and research, according to authors of a review article.
The goals of the initiative, called Minimal Common Oncology Data Elements (mCODE), were to identify the data elements in electronic health records that are “essential” for making treatment decisions and create “a standardized computable data format” that would improve the exchange of data across EHRs, according to the mCODE website.
Travis J. Osterman, DO, of Vanderbilt University Medical Center in Nashville, Tenn., and colleagues described the mCODE initiative in a review published in JCO Clinical Cancer Informatics.
At present, commercially available EHRs are poorly designed to support modern oncology workflow, requiring laborious data entry and lacking a common library of oncology-specific discrete data elements. As an example, most EHRs poorly support the needs of precision oncology and clinical genetics, since next-generation sequencing and genetic test results are almost universally reported in PDF files.
In addition, basic, operational oncology data (e.g., cancer staging, adverse event documentation, response to treatment, etc.) are captured in EHRs primarily as an unstructured narrative.
Computable, analytical data are found for only the small percentage of patients in clinical trials. Even then, some degree of manual data abstraction is regularly required.
Interoperability of EHRs between practices and health care institutions is often so poor that the transfer of basic cancer-related information as analyzable data is difficult or even impossible.
Making progress: The 21st Century Cures Act
The American Society of Clinical Oncology has a more than 15-year history of developing oncology data standards. Unfortunately, progress in implementing these standards has been glacially slow. Impediments have included:
- A lack of conformance with clinical workflows.
- Failure to test standards on specific use cases during pilot testing.
- A focus on data exchange, rather than the practical impediments to data entry.
- Poor engagement with EHR vendors in distributing clinical information modules with an oncology-specific focus.
- Instability of data interoperability technologies.
The 21st Century Cures Act, which became law in December 2016, mandated improvement in the interoperability of health information through the development of data standards and application programming interfaces.
In early 2020, final rules for implementation required technology vendors to employ application programming interfaces using a single interoperability resource. In addition, payers were required to use the United States Core Data for Interoperability Standard for data exchange. These requirements were intended to provide patients with access to their own health care data “without special effort.”
As a fortunate byproduct, since EHR vendors are required to implement application programming interfaces using the Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) specification, the final rules could enable systems like mCODE to be more easily integrated with existing EHRs.
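The FHIR specification’s REST interface is resource-oriented: a client reads a single resource with `GET [base]/[type]/[id]` and searches with query parameters. As a minimal sketch of that convention (the base URL and the search code below are hypothetical placeholders, not a real endpoint or a real mCODE code):

```python
# Minimal sketch of HL7 FHIR REST URL construction.
# The base URL is a hypothetical placeholder; real servers publish their own endpoints.
from urllib.parse import urlencode

FHIR_BASE = "https://example.org/fhir"  # hypothetical server base

def read_url(resource_type: str, resource_id: str) -> str:
    """Build a FHIR 'read' interaction URL: GET [base]/[type]/[id]."""
    return f"{FHIR_BASE}/{resource_type}/{resource_id}"

def search_url(resource_type: str, **params: str) -> str:
    """Build a FHIR 'search' interaction URL: GET [base]/[type]?param=value."""
    return f"{FHIR_BASE}/{resource_type}?{urlencode(params)}"

print(read_url("Patient", "123"))
print(search_url("Condition", code="12345"))  # "12345" is a placeholder code
```

Because every conformant server exposes the same interaction shapes, a standard like mCODE can ride on top of them without per-vendor integration work.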
Lessons from CancerLinQ
ASCO created the health technology platform CancerLinQ in 2014, envisioning that it could become an oncology-focused learning health system – a system in which internal data and experience are systematically integrated with external evidence, allowing knowledge to be put into practice.
CancerLinQ extracts data from EHRs and other sources via direct software connections. CancerLinQ then aggregates, harmonizes, and normalizes the data in a cloud-based environment.
The data are available to participating practices for quality improvement in patient care and secondary research. In 2020, records of cancer patients in the CancerLinQ database surpassed 2 million.
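As a toy illustration of the harmonization step described above (the field names and unit conventions here are invented for the example): records arriving from different EHR feeds may express the same measurement differently, and normalization maps them onto one convention before aggregation.

```python
# Toy sketch of harmonizing lab values from different EHR feeds.
# Field names and units are invented for illustration.
def normalize_hemoglobin(record: dict) -> dict:
    """Convert a hemoglobin result to g/dL regardless of the source feed's unit."""
    value, unit = record["value"], record["unit"]
    if unit == "g/L":  # some feeds report grams per liter
        value, unit = value / 10.0, "g/dL"
    return {"test": "hemoglobin", "value": value, "unit": unit}

feeds = [
    {"value": 13.5, "unit": "g/dL"},
    {"value": 135.0, "unit": "g/L"},
]
normalized = [normalize_hemoglobin(r) for r in feeds]
print(normalized)
```

The same idea, applied across thousands of element types and vendor formats, is what makes postprocessing so laborious when the source data are not standardized at capture.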
CancerLinQ has been successful. However, because of the nature of the EHR ecosystem and the scope and variability of data capture by clinicians, supporting a true learning health system has proven to be a formidable task. Postprocessing manual review using trained human curators is laborious and unsustainable.
The CancerLinQ experience illustrated that basic cancer-pertinent data should be standardized in the EHR and collected prospectively.
The mCODE model
The mCODE initiative seeks to facilitate progress in care quality, clinical research, and health care policy by developing and maintaining a standard, computable, interoperable data format.
Guiding principles that were adopted early in mCODE’s development included:
- A collaborative, noncommercial, use case–driven developmental model.
- Iterative processes.
- User-driven development, refinement, and maintenance.
- Low ongoing maintenance requirements.
A foundational moment in mCODE’s development involved achieving consensus among stakeholders that the project would fail if EHR vendors required additional data entry by users.
After pilot work, a real-world endpoints project, working-group deliberation, public comment, and refinement, the final data standard included six primary domains: patient, disease, laboratory data/vital signs, genomics, treatment, and outcome.
Each domain is further divided into several concepts with specific associated data elements. The data elements are modeled into value sets that specify the possible values for the data element.
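In this scheme, a value set enumerates the codes a data element may legally take, which is what makes the data computable rather than free text. A minimal sketch of that idea (the element name and codes below are illustrative placeholders, not the actual mCODE value-set contents):

```python
# Illustrative sketch: a data element bound to a value set of allowed codes.
# These codes are placeholders, not the actual mCODE value-set contents.
ALLOWED_CLINICAL_STATUS = {"active", "remission", "relapse"}

def in_value_set(value: str, value_set: set) -> bool:
    """Return True if the element's value is drawn from its bound value set."""
    return value in value_set

record = {"clinicalStatus": "remission"}
print(in_value_set(record["clinicalStatus"], ALLOWED_CLINICAL_STATUS))
```

Binding each element to a closed set of values lets receiving systems validate and aggregate records mechanically, with no manual curation.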
To test mCODE, eight organizations representing oncology EHR vendors, standards developers, and research organizations participated in a cancer interoperability track. Feedback from that track helped refine mCODE version 1.0, which was released in March 2020 and is accessible via the mCODE website.
Additions will likely be reviewed by a technical review group after external piloting of new use cases.
Innovation, not regulation
Every interaction between a patient and care provider yields information that could lead to improved safety and better outcomes. To be successful, the information must be collected in a computable format so it can be aggregated with data from other patients, analyzed without manual curation, and shared through interoperable systems. Those data should also be secure enough to protect the privacy of individual patients.
mCODE is a consensus data standard for oncology that provides an infrastructure to share patient data between oncology practices and health care systems while promising little to no additional data entry on the part of clinicians. Adoption by sites will be critical, however.
Publishing the standard through the HL7 FHIR technology demonstrated to EHR vendors and regulatory agencies the stability of HL7, an essential requirement for its incorporation into software.
EHR vendors and others are engaged in the CodeX HL7 FHIR Accelerator to design projects to expand and/or modify mCODE. Their creativity and innovativeness via the external advisory mCODE council and/or CodeX will be encouraged to help mCODE reach its full potential.
As part of CodeX, the Community of Practice, an open forum for end users, was established to provide regular updates about mCODE-related initiatives and use cases to solicit in-progress input, according to Robert S. Miller, MD, medical director of CancerLinQ and an author of the mCODE review.
For mCODE to be embraced by all stakeholders, there should be no additional regulations. By engaging stakeholders in an enterprise that supports innovation and collaboration – without additional regulation – mCODE could maximize the potential of EHRs that, until now, have assisted us only marginally in accomplishing those goals.
mCODE is a joint venture of ASCO/CancerLinQ, the Alliance for Clinical Trials in Oncology Foundation, the MITRE Corporation, the American Society for Radiation Oncology, and the Society of Surgical Oncology.
Dr. Osterman disclosed a grant from the National Cancer Institute and relationships with Infostratix, eHealth, AstraZeneca, Outcomes Insights, Biodesix, MD Outlook, GenomOncology, Cota Healthcare, GE Healthcare, and Microsoft. Dr. Miller and the third review author disclosed no conflicts of interest.
Dr. Lyss was a community-based medical oncologist and clinical researcher for more than 35 years before his recent retirement. His clinical and research interests were focused on breast and lung cancers, as well as expanding clinical trial access to medically underserved populations. He is based in St. Louis. He has no conflicts of interest.
An initiative designed to improve sharing of patient data may provide “tremendous benefits” in cancer care and research, according to authors of a review article.
The goals of the initiative, called Minimal Common Oncology Data Elements (mCODE), were to identify the data elements in electronic health records that are “essential” for making treatment decisions and create “a standardized computable data format” that would improve the exchange of data across EHRs, according to the mCODE website.
Travis J. Osterman, DO, of Vanderbilt University Medical Center in Nashville, Tenn., and colleagues described the mCODE initiative in a review published in JCO Clinical Cancer Informatics.
At present, commercially available EHRs are poorly designed to support modern oncology workflow, requiring laborious data entry and lacking a common library of oncology-specific discrete data elements. As an example, most EHRs poorly support the needs of precision oncology and clinical genetics, since next-generation sequencing and genetic test results are almost universally reported in PDF files.
In addition, basic, operational oncology data (e.g., cancer staging, adverse event documentation, response to treatment, etc.) are captured in EHRs primarily as an unstructured narrative.
Computable, analytical data are found for only the small percentage of patients in clinical trials. Even then, some degree of manual data abstraction is regularly required.
Interoperability of EHRs between practices and health care institutions is often so poor that the transfer of basic cancer-related information as analyzable data is difficult or even impossible.
Making progress: The 21st Century Cures Act
The American Society of Clinical Oncology has a more than 15-year history of developing oncology data standards. Unfortunately, progress in implementing these standards has been glacially slow. Impediments have included:
- A lack of conformance with clinical workflows.
- Failure to test standards on specific-use cases during pilot testing.
- A focus on data exchange, rather than the practical impediments to data entry.
- Poor engagement with EHR vendors in distributing clinical information modules with an oncology-specific focus.
- Instability of data interoperability technologies.
The 21st Century Cures Act, which became law in December 2016, mandated improvement in the interoperability of health information through the development of data standards and application programming interfaces.
In early 2020, final rules for implementation required technology vendors to employ application programming interfaces using a single interoperability resource. In addition, payers were required to use the United States Core Data for Interoperability Standard for data exchange. These requirements were intended to provide patients with access to their own health care data “without special effort.”
As a fortunate byproduct, since EHR vendors are required to implement application programming interfaces using the Health Level Seven International (HL7) Fast Healthcare Interoperability Resources (FHIR) specification, the final rules could enable systems like mCODE to be more easily integrated with existing EHRs.
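Concretely, FHIR exchanges clinical data as standard JSON resources, so an mCODE-profiled record can be read by any conforming system without vendor-specific parsing. The sketch below is illustrative only: the profile URL, SNOMED code, and patient reference are examples, not an excerpt from the mCODE specification.

```python
import json

# A hedged sketch of what an mCODE-profiled FHIR Condition resource
# might look like on the wire. The profile URL, coding, and patient
# reference are illustrative, not taken from the mCODE spec itself.
resource_json = """
{
  "resourceType": "Condition",
  "meta": {
    "profile": ["http://hl7.org/fhir/us/mcode/StructureDefinition/mcode-primary-cancer-condition"]
  },
  "code": {
    "coding": [{"system": "http://snomed.info/sct",
                "code": "254637007",
                "display": "Non-small cell lung cancer"}]
  },
  "subject": {"reference": "Patient/example-123"}
}
"""

condition = json.loads(resource_json)

# Because the payload is plain JSON conforming to a shared profile,
# any system can extract the same fields without custom mapping.
profile = condition["meta"]["profile"][0]
display = condition["code"]["coding"][0]["display"]
print(profile)
print(display)
```

The point of the profile URL in `meta.profile` is that a receiving system can recognize the resource as mCODE-conformant and trust its structure, rather than reverse engineering each sender's export format.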
Lessons from CancerLinQ
ASCO created the health technology platform CancerLinQ in 2014, envisioning that it could become an oncology-focused learning health system – a system in which internal data and experience are systematically integrated with external evidence, allowing knowledge to be put into practice.
CancerLinQ extracts data from EHRs and other sources via direct software connections. CancerLinQ then aggregates, harmonizes, and normalizes the data in a cloud-based environment.
The data are available to participating practices for quality improvement in patient care and secondary research. In 2020, records of cancer patients in the CancerLinQ database surpassed 2 million.
CancerLinQ has been successful. However, because of the nature of the EHR ecosystem and the scope and variability of data capture by clinicians, supporting a true learning health system has proven to be a formidable task. Postprocessing manual review using trained human curators is laborious and unsustainable.
The CancerLinQ experience illustrated that basic cancer-pertinent data should be standardized in the EHR and collected prospectively.
The mCODE model
The mCODE initiative seeks to facilitate progress in care quality, clinical research, and health care policy by developing and maintaining a standard, computable, interoperable data format.
Guiding principles that were adopted early in mCODE’s development included:
- A collaborative, noncommercial, use case–driven developmental model.
- Iterative processes.
- User-driven development, refinement, and maintenance.
- Low ongoing maintenance requirements.
A foundational moment in mCODE’s development involved achieving consensus among stakeholders that the project would fail if EHR vendors required additional data entry by users.
After pilot work, a real-world endpoints project, working-group deliberation, public comment, and refinement, the final data standard included six primary domains: patient, disease, laboratory data/vital signs, genomics, treatment, and outcome.
Each domain is further divided into several concepts with specific associated data elements. The data elements are modeled into value sets that specify the possible values for the data element.
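The hierarchy above (domain, concept, data element, value set) can be pictured with a minimal sketch. The element names and allowed values below are invented for illustration and are not the actual mCODE definitions; the point is that each element admits only values from its value set, keeping the data computable rather than free text.

```python
# A minimal, hypothetical model of data elements constrained by value
# sets. Names and values are illustrative, not drawn from mCODE.
VALUE_SETS = {
    "clinical_stage": {"Stage I", "Stage II", "Stage III", "Stage IV"},
    "treatment_intent": {"Curative", "Palliative"},
}

def validate(element: str, value: str) -> bool:
    """Return True only if `value` is in the element's allowed value set."""
    allowed = VALUE_SETS.get(element)
    return allowed is not None and value in allowed

# Constrained values remain aggregatable across systems...
assert validate("clinical_stage", "Stage III")
# ...while free-text variants that defeat automated analysis are rejected.
assert not validate("clinical_stage", "stage 3, pt doing well")
```

This is the property that distinguishes a value-set-backed data element from the unstructured narrative capture described earlier: two institutions recording "Stage III" can pool their records without manual curation.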
To test mCODE, eight organizations representing oncology EHR vendors, standards developers, and research organizations participated in a cancer interoperability track. Feedback from this track helped refine mCODE version 1.0, which was released in March 2020 and is accessible via the mCODE website.
Additions will likely be reviewed by a technical review group after external piloting of new use cases.
Innovation, not regulation
Every interaction between a patient and care provider yields information that could lead to improved safety and better outcomes. To be successful, the information must be collected in a computable format so it can be aggregated with data from other patients, analyzed without manual curation, and shared through interoperable systems. Those data should also be secure enough to protect the privacy of individual patients.
mCODE is a consensus data standard for oncology that provides an infrastructure to share patient data between oncology practices and health care systems while promising little to no additional data entry on the part of clinicians. Adoption by sites will be critical, however.
Publishing the standard through the HL7 FHIR technology demonstrated to EHR vendors and regulatory agencies the stability of HL7, an essential requirement for its incorporation into software.
EHR vendors and others are engaged in the CodeX HL7 FHIR Accelerator to design projects that expand and/or modify mCODE. Their creativity and innovation, channeled through the external advisory mCODE council and/or CodeX, will be encouraged to help mCODE reach its full potential.
As part of CodeX, the Community of Practice, an open forum for end users, was established to provide regular updates about mCODE-related initiatives and use cases, and to solicit in-progress input, according to Robert S. Miller, MD, medical director of CancerLinQ and an author of the mCODE review.
For mCODE to be embraced by all stakeholders, there should be no additional regulations. By engaging stakeholders in an enterprise that supports innovation and collaboration – without additional regulation – mCODE could maximize the potential of EHRs that, until now, have assisted us only marginally in accomplishing those goals.
mCODE is a joint venture of ASCO/CancerLinQ, the Alliance for Clinical Trials in Oncology Foundation, the MITRE Corporation, the American Society for Radiation Oncology, and the Society of Surgical Oncology.
Dr. Osterman disclosed a grant from the National Cancer Institute and relationships with Infostratix, eHealth, AstraZeneca, Outcomes Insights, Biodesix, MD Outlook, GenomOncology, Cota Healthcare, GE Healthcare, and Microsoft. Dr. Miller and the third review author disclosed no conflicts of interest.
Dr. Lyss was a community-based medical oncologist and clinical researcher for more than 35 years before his recent retirement. His clinical and research interests were focused on breast and lung cancers, as well as expanding clinical trial access to medically underserved populations. He is based in St. Louis. He has no conflicts of interest.
FROM JCO CLINICAL CANCER INFORMATICS