SAN DIEGO – Researchers say that they’ve developed an easy and inexpensive way to instantly track divergences in thinking between faculty and trainees as they ponder cases presented in Morbidity and Mortality (M&M) conferences. They’ve already produced an intriguing early finding: Interns and junior residents hew more closely than do their elders to estimates provided by a surgical risk calculator.
The research has the potential to shed light on problems in the much-maligned M&M, says study leader Ira Leeds, MD, of Johns Hopkins University, Baltimore. He presented the study findings at the annual Clinical Congress of the American College of Surgeons.
“This project demonstrates that educational technologies can reveal important gaps in surgical education,” said Dr. Leeds, who made comments during his presentation and in an interview.
At issue: The M&M conference, a mainstay of medical education. “This has been defined as the ‘golden hour’ of surgical education,” Dr. Leeds said. “By discussing someone else’s complications, you can learn how to handle your own in the future.”
However, he added, “there’s very little evidence that we’re currently learning from M&M.”
Dr. Leeds and his colleagues are studying the M&M’s role in medical education to see if it can be improved. The new study, a prospective time-series analysis of weekly M&M conferences, aims to understand the potential value of a real-time feedback system. The idea is to develop a way to alert participants to discrepancies in their perceptions about cases.
The researchers turned to a company called Poll Everywhere, whose technology allowed them to collect instant opinions about M&M cases from those in attendance. During 2016-2017, 110 faculty, residents, and interns used Poll Everywhere’s smartphone app to do two things – make guesses about the root causes of adverse events and estimate the risk of complications from surgical procedures over the next 30 days.
“We can see all the results streaming in real time,” said Dr. Leeds, noting that the service cost $600 per year.
The participants, about two-thirds of whom were male, included faculty (35%), fellows and senior residents (28%), and interns and junior residents (37%). They had an average of 9 years of training.
The 34 M&M cases represented a mixture of surgical specialties, including oncology, trauma, transplant, and others.
In terms of the root cause analysis, the technology allowed researchers to instantly detect whether the assessments of faculty and trainees were far apart.
The researchers also compared the participants’ risk estimates to those provided by the NSQIP Risk Calculator. They found that the participants tended to estimate risk higher than the calculator did, with an absolute mean difference of 7.7 percentage points.
“They were overestimating risk by nearly 8 percentage points,” Dr. Leeds said. This isn’t surprising, since other research has revealed a trend toward overestimation of risk by physicians, compared with calculators, he added.
Faculty and senior residents overestimated risk by similar amounts (means of 8.6 and 7.2 percentage points above the calculator, respectively). Interns and junior residents, however, exceeded the calculator by a mean of only 4.9 percentage points.
What’s going on? Are the less experienced staff members outperforming their teachers? Another possibility, Dr. Leeds said, is that “the senior surgeons are better picking up on nuances that aren’t being captured by predictive models or the underdeveloped intuition of a junior trainee.”
Rachel Dawn Aufforth, MD, of Johns Hopkins Medicine, who served as discussant for the presentation by Dr. Leeds, said she looks forward to seeing if this technology can improve resident education. She also wondered why estimates via the risk calculator were chosen as a baseline, especially considering that surgeons tend to estimate higher levels of risk.
“One of the things we’ve been trying to do is look at time-series differences,” Dr. Leeds said. “Are they getting better over an academic year? And does that vary by faculty, especially for interns? The calculator isn’t changing or learning on its own.”
In the big picture, the study shows that “collecting real-time risk estimates and root cause assignment is feasible and can be performed as part of routine M&M conferences,” he said.
The study was funded in part by Johns Hopkins University School of Medicine Institute for Excellence in Education. Dr. Leeds reports no relevant disclosures.