How Medicare Reimbursement Trends Could Affect Breast Surgeries
Medicare reimbursement for breast cancer surgery has not kept pace with inflation over the past two decades, according to new research presented by Terry P. Gao, MD, at the American Society of Breast Surgeons annual meeting.
Medicare reimbursements often set a benchmark that private insurers follow, but the impact of reimbursement changes on various breast surgeries has not been examined, said Dr. Gao, a research resident at Temple University Hospital, Philadelphia, during a press briefing held in advance of the meeting.
“This study is important because it is the first to analyze trends in Medicare reimbursement for breast cancer surgery over a long period,” Dr. Gao said during an interview. The findings highlight a critical issue that could impact access to quality care, especially for vulnerable populations, she said.
How Were the Data Analyzed?
Dr. Gao and colleagues reviewed percent changes in procedure reimbursement over a 20-year period and compared them with changes in the consumer price index (CPI) to show the real-life impact of inflation.
The study examined reimbursements based on the Medicare Physician Fee Schedule Look-Up Tool from 2003 to 2023 for 10 procedures. The procedures were core needle biopsy, open incisional breast biopsy, open excisional breast biopsy, lumpectomy, lumpectomy with axillary lymph node dissection (ALND), simple mastectomy, radical mastectomy, modified radical mastectomy, biopsy/removal of lymph nodes, and sentinel lymph node biopsy.
What Does the New Study Show?
“Reimbursements did not keep pace with the price of goods and services,” Dr. Gao said during the press briefing.
After the researchers corrected the data for inflation, the overall mean Medicare reimbursement for breast cancer surgeries decreased by approximately 21%, reflecting in part the 69% increase in the CPI over the study period, Dr. Gao said. The greatest change was for core needle biopsy, for which reimbursement decreased by 36%.
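As a rough illustration of the inflation correction described above, a 2023 fee can be expressed in 2003 dollars by dividing by the CPI growth factor. In the sketch below, only the 69% CPI increase comes from the study; the dollar amounts are hypothetical, chosen so the result lands near the reported 21% decline.

    # Hedged sketch: express a nominal 2023 fee in constant 2003 dollars using the CPI ratio.
    cpi_growth = 0.69                 # 69% CPI increase over 2003-2023 (reported in the study)
    fee_2003 = 100.00                 # hypothetical 2003 reimbursement, in dollars
    fee_2023_nominal = 133.00         # hypothetical 2023 reimbursement, in dollars

    fee_2023_real = fee_2023_nominal / (1 + cpi_growth)     # 2023 fee in 2003 dollars
    real_change = (fee_2023_real - fee_2003) / fee_2003
    print(f"Inflation-adjusted change: {real_change:.1%}")  # about -21% with these inputs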
After inflation adjustment, reimbursement increases were seen for only two procedures, lumpectomy (0.37%) and simple mastectomy (3.58%), but these do not represent meaningful gains, Dr. Gao said.
The researchers also used a model to estimate the real-life impact of decreased reimbursement on clinicians. They subtracted the actual 2023 compensation from expected 2023 compensation based on inflation for a breast cancer case incidence of 297,790 patients who underwent axillary surgery, breast lumpectomy, or simple mastectomy. The calculated potential real-world compensation loss for that year was $107,604,444.
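A back-of-envelope reading of that model, using only the case count and total shortfall reported above, gives the implied average loss per case; no other study figures are assumed.

    # Implied average 2023 compensation shortfall per breast cancer surgical case.
    cases_2023 = 297_790              # reported case incidence (axillary surgery, lumpectomy, simple mastectomy)
    total_loss = 107_604_444          # reported inflation-adjusted shortfall, in dollars

    loss_per_case = total_loss / cases_2023
    print(f"Average shortfall per case: ${loss_per_case:,.0f}")  # roughly $361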
What Are the Clinical Implications?
The current study is the first to put specific numbers on the trend in declining breast cancer payments, and the findings should encourage physicians to advocate for equitable policies, Dr. Gao noted during the briefing.
The decrease in inflation-adjusted reimbursement rates was substantial, she said during the interview. Although the decline reflects similar trends seen in other specialties, its magnitude is a potential cause for concern, she said.
Declining reimbursements could disproportionately hurt safety-net hospitals serving vulnerable populations by limiting their ability to invest in better care and potentially worsening existing racial disparities, Dr. Gao told this publication. “Additionally, surgeons may opt out of Medicare networks due to low rates, leading to access issues and longer wait times. Finally, these trends could discourage future generations from specializing in breast cancer surgery.”
The study findings should be considered in the context of the complex and rapidly changing clinical landscape in which breast cancer care is evolving, Mediget Teshome, MD, chief of breast surgery at UCLA Health, said during an interview.
“Surgery remains a critically important aspect to curative treatment,” Dr. Teshome said.
Surgical decision-making tailored to each patient’s goals involves coordination from a multidisciplinary team as well as skill and attention from surgeons, she added.
“This degree of specialization and nuance is not always captured in reimbursement models for breast surgery,” Dr. Teshome emphasized. The policy implications of any changes in Medicare reimbursement will be important, given that the American Cancer Society reports breast cancer as the most commonly diagnosed cancer in US women and the second leading cause of cancer death in this group, she noted.
What Additional Research Is Needed?
Research is needed to understand how declining reimbursements affect patients’ access to care, treatment choices, and long-term outcomes, Dr. Gao said in the interview. Future studies also are needed to examine provider overhead costs, staffing structures, and profit margins to offer a more comprehensive understanding of financial sustainability.
Dr. Gao and Dr. Teshome had no financial conflicts to disclose.
No Routine Cancer Screening Option? New MCED Tests May Help
Analyses presented during a session at the American Association for Cancer Research annual meeting revealed that three new multicancer early detection (MCED) tests — CanScan, MERCURY, and OncoSeek — could detect a range of cancers and recognize the tissue of origin with high accuracy. One of them — OncoSeek — could also provide an affordable cancer screening option for individuals living in lower-income countries.
The need for these noninvasive liquid biopsy tests that can accurately identify multiple cancer types with a single blood draw, especially cancers without routine screening strategies, is pressing. “We know that the current cancer standard of care screening will identify less than 50% of all cancers, while more than 50% of all cancer deaths occur in types of cancer with no recommended screening,” said co-moderator Marie E. Wood, MD, of the University of Colorado Anschutz Medical Campus, in Aurora, Colorado.
That being said, “the clinical utility of multicancer detection tests has not been established and we’re concerned about issues of overdiagnosis and overtreatment,” she noted.
The Early Data
One new MCED test called CanScan, developed by Geneseeq Technology, uses plasma cell-free DNA fragment patterns to detect cancer signals as well as identify the tissue of origin across 13 cancer types.
Overall, the CanScan test covers cancer types that contribute to two thirds of new cancer cases and 74% of mortality globally, said presenter Shanshan Yang, of Geneseeq Research Institute, in Nanjing, China.
However, only five of these cancer types have screening recommendations issued by the US Preventive Services Task Force (USPSTF), Dr. Yang added.
The interim data come from an ongoing large-scale prospective study evaluating the MCED test in a cohort of individuals aged 45-75 years with an average risk for cancer and no cancer-related symptoms at enrollment.
Participants had blood collected for the CanScan test at baseline and then received routine physical exams annually for 3 consecutive years, with an additional 2 years of follow-up.
The analysis included 3724 participants with analyzable samples at the data cutoff in September 2023. Among them, 29 had confirmed cancer diagnoses. Of these cases, 14 were confirmed through USPSTF-recommended screening and 15 were detected outside of standard USPSTF screening, for example by thyroid ultrasound, Dr. Yang explained.
Almost 90% of the cancers (26 of 29) were detected at stage I or II, and eight (27.5%) were not among the test’s 13 targeted cancer types.
The CanScan test had a sensitivity of 55.2%, identifying 16 of the 29 patients with cancer, including 10 of 21 individuals with stage I disease (47.6%) and two of three with stage II disease (66.7%).
The test had a high specificity of 97.9%, meaning that, of every 100 cancer-free people screened, only about two would receive a false-positive result.
Among the 15 patients who had their cancer detected outside of USPSTF screening recommendations, eight (53.3%) were found using a CanScan test, including patients with liver and endometrial cancers.
Compared with a positive predictive value (PPV) of 1.6% for screening or physical exam methods alone, the CanScan test had a PPV of 17.4%, Dr. Yang reported.
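As a consistency check, the reported PPV can be approximated from the cohort figures above (cohort size, number of cancers, sensitivity, and specificity); the exact case counts used in the study may differ slightly.

    # Approximate CanScan's PPV from the reported cohort-level figures (not the study's own calculation).
    participants = 3724
    cancers = 29
    sensitivity = 0.552               # 16 of 29 cancers flagged
    specificity = 0.979

    true_pos = sensitivity * cancers                          # about 16
    false_pos = (1 - specificity) * (participants - cancers)  # about 78
    ppv = true_pos / (true_pos + false_pos)
    print(f"Estimated PPV: {ppv:.1%}")  # about 17%, close to the reported 17.4%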
“The MCED test holds significant potential for early cancer screening in asymptomatic populations,” Dr. Yang and colleagues concluded.
Another new MCED test called MERCURY, also developed by Geneseeq Technology and presented during the session, used a similar method to detect cancer signals and predict the tissue of origin across 13 cancer types.
The researchers initially validated the test using 3076 patients with cancer and 3477 healthy controls with a target specificity of 99%. In this group, researchers reported a sensitivity of 0.865 and a specificity of 0.989.
The team then performed an independent validation analysis with 1465 participants, 732 with cancer and 733 with no cancer, and confirmed a high sensitivity and specificity of 0.874 and 0.978, respectively. The sensitivity increased incrementally by cancer stage — 0.768 for stage I, 0.840 for stage II, 0.923 for stage III, and 0.971 for stage IV.
The test identified the tissue of origin with high accuracy, the researchers noted, but cautioned that the test needs “to be further validated in a prospective cohort study.”
MCED in Low-Income Settings
The session also featured findings on a new affordable MCED test called OncoSeek, which could provide greater access to cancer testing in low- and middle-income countries.
The OncoSeek algorithm identifies the presence of cancer using seven protein tumor markers alongside clinical information, such as gender and age. Like the other tests, it also predicts the possible tissue of origin.
The test can be run on clinical protein assay instruments that are already widely available, such as the Roche cobas analyzer, Mao Mao, MD, PhD, the founder and CEO of SeekIn, of Shenzhen, China, told this news organization.
This “feature makes the test accessible worldwide, even in low- and middle-income countries,” he said. “These instruments are fully automated and part of today’s clinical practice. Therefore, the test does not require additional infrastructure building and lab personnel training.”
Another notable advantage: the OncoSeek test costs only about $20, whereas other MCED tests can cost anywhere from $200 to $1000.
To validate the technology in a large, diverse cohort, Dr. Mao and colleagues enrolled approximately 10,000 participants, including 2003 cancer cases and 7888 non-cancer cases.
Peripheral blood was collected from each participant and analyzed using a panel of the seven protein tumor markers — AFP, CA125, CA15-3, CA19-9, CA72-4, CEA, and CYFRA 21-1.
To reduce the risk for false positive findings, the team designed the OncoSeek algorithm to achieve a specificity of 93%. Dr. Mao and colleagues found a sensitivity of 51.7%, resulting in an overall accuracy of 84.6%.
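The reported accuracy follows directly from those figures; a quick check using the stated case counts, sensitivity, and specificity (assuming accuracy is computed as correct calls over all participants):

    # Check that 51.7% sensitivity and 93% specificity imply roughly 84.6% overall accuracy.
    cancer_cases = 2003
    non_cancer_cases = 7888
    sensitivity = 0.517
    specificity = 0.93

    true_pos = sensitivity * cancer_cases        # about 1036 cancers correctly flagged
    true_neg = specificity * non_cancer_cases    # about 7336 non-cancer cases correctly cleared
    accuracy = (true_pos + true_neg) / (cancer_cases + non_cancer_cases)
    print(f"Overall accuracy: {accuracy:.1%}")   # about 84.6%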
The performance was consistent in additional validation cohorts in Brazil, China, and the United States, with sensitivities ranging from 39.0% to 77.6% for detecting nine common cancer types, including breast, colorectal, liver, lung, lymphoma, esophagus, ovary, pancreas, and stomach. The sensitivity for pancreatic cancer was at the high end of 77.6%.
The test could predict the tissue of origin in about two thirds of cases.
Given its low cost, OncoSeek represents an affordable and accessible option for cancer screening, the authors concluded.
Overall, “I think MCEDs have the potential to enhance cancer screening,” Dr. Wood told this news organization.
Still, questions remain about the optimal use of these tests, such as whether they are best suited for average-risk or higher-risk populations, and how to integrate them into standard screening, she said.
Dr. Wood also cautioned that the studies presented in the session represent early data, and it is likely that the numbers, such as sensitivity and specificity, will change with further prospective analyses.
And ultimately, these tests should complement, not replace, standard screening. “A negative testing should not be taken as a sign to avoid standard screening,” Dr. Wood said.
Dr. Yang is an employee of Geneseeq Technology, Inc., and Dr. Mao is an employee of SeekIn. Dr. Wood had no disclosures to report.
A version of this article appeared on Medscape.com.
Are You Ready for AI to Be a Better Doctor Than You?
In a 2023 study published in the Annals of Emergency Medicine, European researchers fed the AI system ChatGPT information on 30 ER patients. Details included physician notes on the patients’ symptoms, physical exams, and lab results. ChatGPT made the correct diagnosis in 97% of patients compared to 87% for human doctors.
AI 1, Physicians 0
JAMA Cardiology reported in 2021 that an AI trained on nearly a million ECGs performed comparably to or exceeded cardiologist clinical diagnoses and the MUSE (GE Healthcare) system’s automated ECG analysis for most diagnostic classes.
AI 2, Physicians 0
Google’s medically focused AI model (Med-PaLM2) scored 85%+ when answering US Medical Licensing Examination–style questions. That’s an “expert” physician level and far beyond the accuracy threshold needed to pass the actual exam.
AI 3, Physicians 0
A new AI tool that uses an online finger-tapping test outperformed primary care physicians when assessing the severity of Parkinson’s disease.
AI 4, Physicians 0
JAMA Ophthalmology reported in 2024 that a chatbot outperformed glaucoma specialists and matched retina specialists in diagnostic and treatment accuracy.
AI 5, Physicians 0
Should we stop? Because we could go on. In the last few years, these AI vs Physician studies have proliferated, and guess who’s winning?
65% of Doctors Are Concerned
Now, the standard answer with anything AI-and-Medicine goes something like this: AI is coming, and it will be a transformative tool for physicians and improve patient care.
But the underlying unanswered question is: How do physicians feel about that?
The Medscape 2023 Physician and AI Report surveyed 1043 US physicians about their views on AI. In total, 65% are concerned about AI making diagnosis and treatment decisions, but 56% are enthusiastic about having it as an adjunct.
Cardiologists, anesthesiologists, and radiologists are most enthusiastic about AI, whereas family physicians and pediatricians are the least enthusiastic.
To get a more personal view of how physicians and other healthcare professionals are feeling about this transformative tech, I spoke with a variety of practicing doctors, a psychotherapist, and a third-year Harvard Medical School student.
‘Abysmally Poor Understanding’
Alfredo A. Sadun, MD, PhD, has been a neuro-ophthalmologist for nearly 50 years. A graduate of MIT and vice-chair of ophthalmology at UCLA, he’s long been fascinated by AI’s march into medicine. He’s watched it accomplish things that no ophthalmologist can do, such as identify gender, age, and risk for heart attack and stroke from retinal scans. But he doesn’t see the same level of interest and comprehension among the medical community.
“There’s still an abysmally poor understanding of AI among physicians in general,” he said. “It’s striking because these are intelligent, well-educated people. But we tend to draw conclusions based on what we’re familiar with, and most doctors’ experience with computers involves EHRs [electronic health records] and administrative garbage. It’s the reason they’re burning out.”
Easing the Burden
Anthony Philippakis, MD, PhD, left his cardiology practice in 2015 to become the chief data officer at the Broad Institute of MIT and Harvard. While there, he helped develop an AI-based method for identifying patients at risk for atrial fibrillation. Now, he’s a general partner at Google Ventures with the goal of bridging the gap between data sciences and medicine. His perspective on AI is unique, given that he’s seen the issue from both sides.
“I am not a bitter physician, but to be honest, when I was practicing, way too much of my time was spent staring at screens and not enough laying hands on patients,” he said. “Can you imagine what it would be like to speak to the EHR naturally and say, ‘Please order the following labs for this patient and notify me when the results come in.’ Boy, would that improve healthcare and physician satisfaction. Every physician I know is excited and optimistic about that. Almost everyone I’ve talked to feels like AI could take a lot of the stuff they don’t like doing off their plates.”
Indeed, the dividing line between physician support for AI and physician suspicion or skepticism of AI is just that. In our survey, more than three quarters of physicians said they would consider using AI for office administrative tasks, scheduling, EHRs, researching medical conditions, and even summarizing a patient’s record before a visit. But far fewer are supportive of it delivering diagnoses and treatments. This, despite an estimated 800,000 Americans dying or becoming permanently disabled each year because of diagnostic error.
Could AI Have Diagnosed This?
John D. Nuschke, MD, has been a primary care physician in Allentown, Pennsylvania, for 40 years. He’s a jovial general physician who insists his patients call him Jack. He’s recently started using an AI medical scribe called Freed. With the patient’s permission, it listens in on the visit and generates notes, saving Dr. Nuschke time and helping him focus on the person. He likes that type of assistance, but when it comes to AI replacing him, he’s skeptical.
“I had this patient I diagnosed with prostate cancer,” he explained. “He got treated and was fine for 5 years. Then, he started losing weight and feeling awful — got weak as a kitten. He went back to his urologist and oncologist who thought he had metastatic prostate cancer. He went through PET scans and blood work, but there was no sign his cancer had returned. So the specialists sent him back to me, and the second he walked in, I saw he was floridly hyperthyroid. I could tell across the room just by looking at him. Would AI have been able to make that diagnosis? Does AI do physical exams?”
Dr. Nuschke said he’s also had several instances where patients received their cancer diagnosis from the lab through an automated patient-portal system rather than from him. “That’s an AI of sorts, and I found it distressing,” he said.
Empathy From a Robot
All the doctors I spoke to were hopeful that by freeing them from the burden of administrative work, they would be able to return to the reason they got into this business in the first place — to spend more time with patients in need and support them with grace and compassion.
But suppose AI could do that too?
In a 2023 study conducted at the University of California San Diego and published in JAMA Internal Medicine, three licensed healthcare professionals compared the responses of ChatGPT and physicians to real-world health questions. The panel rated the AI’s answers nearly four times higher in quality and almost 10 times more empathetic than physicians’ replies.
A similar 2024 study in Nature found that Google’s large-language model AI matched or surpassed physician diagnostic accuracy in all six of the medical specialties considered. Plus, it outperformed doctors in 24 of 26 criteria for conversation quality, including politeness, explanation, honesty, and expressing care and commitment.
Nathaniel Chin, MD, is a gerontologist at the University of Wisconsin and advisory board member for the Alzheimer’s Foundation of America. Although he admits that studies like these “sadden me,” he’s also a realist. “There was hesitation among physicians at the beginning of the pandemic to virtual care because we missed the human connection,” he explained, “but we worked our way around that. We need to remember that what makes a chatbot strong is that it’s not human. It doesn’t burn out, it doesn’t get tired, it can look at data very quickly, and it doesn’t have to go home to a family and try to balance work with other aspects of life. A human being is very complex, whereas a chatbot has one single purpose.”
“Even if you don’t have AI in your space now or don’t like the idea of it, that doesn’t matter,” he added. “It’s coming. But it needs to be done right. If AI is implemented by clinicians for clinicians, it has great potential. But if it’s implemented by businesspeople for business reasons, perhaps not.”
‘The Ones Who Use the Tools the Best Will Be the Best’
One branch of medicine that stands to be dramatically affected by AI is mental health. Because bots are natural data-crunchers, they are becoming adept at analyzing the many subtle clues (phrasing in social media posts and text messages, smartwatch biometrics, therapy session videos…) that could indicate depression or other psychological disorders. In fact, their availability via smartphone apps could help democratize and destigmatize the practice.
“There is a day ahead — probably within 5 years — when a patient won’t be able to tell the difference between a real therapist and an AI therapist,” said Ken Mallon, MS, LMFT, a clinical psychotherapist and data scientist in San Jose, California. “That doesn’t worry me, though. It’s hard on therapists’ egos, but new technologies get developed. Things change. People who embrace these tools will benefit from them. The ones who use the tools the best will be the best.”
Time to Restructure Med School
Aditya Jain is in his third year at Harvard Medical School. At age 24, he’s heading into this brave new medical world with excitement and anxiety. Excitement because he sees AI revolutionizing healthcare on every level. Although the current generations of physicians and patients may grumble about its onset, he believes younger ones will feel comfortable with “DocGPT.” He’s excited that his generation of physicians will be the “translators and managers of this transition” and redefine “what it means to be a doctor.”
His anxiety, however, stems from the fact that AI has come on so fast that “it has not yet crossed the threshold of medical education,” he said. “Medical schools still largely prepare students to work as solo clinical decision makers. Most of my first 2 years were spent on pattern recognition and rote memorization, skills that AI can and will master.”
Indeed, Mr. Jain said AI was not a part of his first- or second-year curriculum. “I talk to students who are a year older than me, graduating, heading to residency, and they tell me they wish they had gotten a better grasp of how to use these technologies in medicine and in their practice. They were surprised to hear that people in my year hadn’t started using ChatGPT. We need to expend a lot more effort within the field, within academia, within practicing physicians, to figure out what our role will be in a world where AI is matching or even exceeding human intelligence. And then we need to restructure the medical education to better accomplish these goals.”
So Are You Ready for AI to Be a Better Doctor Than You?
“Yes, I am,” said Dr. Philippakis without hesitation. “When I was going through my medical training, I was continually confronted with the reality that I personally was not smart enough to keep all the information in my head that could be used to make a good decision for a patient. We have now reached a point where the amount of information that is important and useful in the practice of medicine outstrips what a human being can know. The opportunity to enable physicians with AI to remedy that situation is a good thing for doctors and, most importantly, a good thing for patients. I believe the future of medicine belongs not so much to the AI practitioner but to the AI-enabled practitioner.”
“Quick story,” added Dr. Chin. “I asked ChatGPT two questions. The first was ‘Explain the difference between Alzheimer’s and dementia’ because that’s the most common misconception in my field. And it gave me a pretty darn good answer — one I would use in a presentation with some tweaking. Then I asked it, ‘Are you a better doctor than me?’ And it replied, ‘My purpose is not to replace you, my purpose is to be supportive of you and enhance your ability.’ ”
A version of this article appeared on Medscape.com.
AI 1, Physicians 0
JAMA Cardiology reported in 2021 that an AI trained on nearly a million ECGs performed comparably to or exceeded cardiologist clinical diagnoses and the MUSE (GE Healthcare) system’s automated ECG analysis for most diagnostic classes.
AI 2, Physicians 0
Google’s medically focused AI model (Med-PaLM2) scored 85%+ when answering US Medical Licensing Examination–style questions. That’s an “expert” physician level and far beyond the accuracy threshold needed to pass the actual exam.
AI 3, Physicians 0
A new AI tool that uses an online finger-tapping test outperformed primary care physicians when assessing the severity of Parkinson’s disease.
AI 4, Physicians 0
JAMA Ophthalmology reported in 2024 that a chatbot outperformed glaucoma specialists and matched retina specialists in diagnostic and treatment accuracy.
AI 5, Physicians 0
Should we stop? Because we could go on. In the last few years, these AI vs Physician studies have proliferated, and guess who’s winning?
65% of Doctors Are Concerned
Now, the standard answer with anything AI-and-Medicine goes something like this: AI is coming, and it will be a transformative tool for physicians and improve patient care.
But the underlying unanswered question is: Are you ready for AI to be a better doctor than you?
The Medscape 2023 Physician and AI Report surveyed 1,043 US physicians about their views on AI. In total, 65% are concerned about AI making diagnosis and treatment decisions, but 56% are enthusiastic about having it as an adjunct.
Cardiologists, anesthesiologists, and radiologists are most enthusiastic about AI, whereas family physicians and pediatricians are the least enthusiastic.
To get a more personal view of how physicians and other healthcare professionals are feeling about this transformative tech, I spoke with a variety of practicing doctors, a psychotherapist, and a third-year Harvard Medical School student.
‘Abysmally Poor Understanding’
Alfredo A. Sadun, MD, PhD, has been a neuro-ophthalmologist for nearly 50 years. A graduate of MIT and vice-chair of ophthalmology at UCLA, he’s long been fascinated by AI’s march into medicine. He’s watched it accomplish things that no ophthalmologist can do, such as identify gender, age, and risk for heart attack and stroke from retinal scans. But he doesn’t see the same level of interest and comprehension among the medical community.
“There’s still an abysmally poor understanding of AI among physicians in general,” he said. “It’s striking because these are intelligent, well-educated people. But we tend to draw conclusions based on what we’re familiar with, and most doctors’ experience with computers involves EHRs [electronic health records] and administrative garbage. It’s the reason they’re burning out.”
Easing the Burden
Anthony Philippakis, MD, PhD, left his cardiology practice in 2015 to become the chief data officer at the Broad Institute of MIT and Harvard. While there, he helped develop an AI-based method for identifying patients at risk for atrial fibrillation. Now, he’s a general partner at Google Ventures with the goal of bridging the gap between data sciences and medicine. His perspective on AI is unique, given that he’s seen the issue from both sides.
“I am not a bitter physician, but to be honest, when I was practicing, way too much of my time was spent staring at screens and not enough laying hands on patients,” he said. “Can you imagine what it would be like to speak to the EHR naturally and say, ‘Please order the following labs for this patient and notify me when the results come in.’ Boy, would that improve healthcare and physician satisfaction. Every physician I know is excited and optimistic about that. Almost everyone I’ve talked to feels like AI could take a lot of the stuff they don’t like doing off their plates.”
Indeed, the dividing line between physician support for AI and physician suspicion or skepticism of it is exactly that: relief from administrative drudgery versus making clinical decisions. In our survey, more than three quarters of physicians said they would consider using AI for office administrative tasks, scheduling, EHRs, researching medical conditions, and even summarizing a patient’s record before a visit. But far fewer are supportive of it delivering diagnoses and treatments. That is despite an estimated 800,000 Americans dying or becoming permanently disabled each year because of diagnostic error.
Could AI Have Diagnosed This?
John D. Nuschke, MD, has been a primary care physician in Allentown, Pennsylvania, for 40 years. He’s a jovial general physician who insists his patients call him Jack. He’s recently started using an AI medical scribe called Freed. With the patient’s permission, it listens in on the visit and generates notes, saving Dr. Nuschke time and helping him focus on the person. He likes that type of assistance, but when it comes to AI replacing him, he’s skeptical.
“I had this patient I diagnosed with prostate cancer,” he explained. “He got treated and was fine for 5 years. Then, he started losing weight and feeling awful — got weak as a kitten. He went back to his urologist and oncologist who thought he had metastatic prostate cancer. He went through PET scans and blood work, but there was no sign his cancer had returned. So the specialists sent him back to me, and the second he walked in, I saw he was floridly hyperthyroid. I could tell across the room just by looking at him. Would AI have been able to make that diagnosis? Does AI do physical exams?”
Dr. Nuschke said he’s also had several instances where patients received their cancer diagnosis from the lab through an automated patient-portal system rather than from him. “That’s an AI of sorts, and I found it distressing,” he said.
Empathy From a Robot
All the doctors I spoke to were hopeful that by freeing them from the burden of administrative work, they would be able to return to the reason they got into this business in the first place — to spend more time with patients in need and support them with grace and compassion.
But suppose AI could do that too?
In a 2023 study conducted at the University of California San Diego and published in JAMA Internal Medicine, three licensed healthcare professionals compared the responses of ChatGPT and physicians to real-world health questions. The panel rated the AI’s answers nearly four times higher in quality and almost 10 times more empathetic than physicians’ replies.
A similar 2024 study in Nature found that Google’s large-language model AI matched or surpassed physician diagnostic accuracy in all six of the medical specialties considered. Plus, it outperformed doctors in 24 of 26 criteria for conversation quality, including politeness, explanation, honesty, and expressing care and commitment.
Nathaniel Chin, MD, is a gerontologist at the University of Wisconsin and advisory board member for the Alzheimer’s Foundation of America. Although he admits that studies like these “sadden me,” he’s also a realist. “There was hesitation among physicians at the beginning of the pandemic to virtual care because we missed the human connection,” he explained, “but we worked our way around that. We need to remember that what makes a chatbot strong is that it’s not human. It doesn’t burn out, it doesn’t get tired, it can look at data very quickly, and it doesn’t have to go home to a family and try to balance work with other aspects of life. A human being is very complex, whereas a chatbot has one single purpose.”
“Even if you don’t have AI in your space now or don’t like the idea of it, that doesn’t matter,” he added. “It’s coming. But it needs to be done right. If AI is implemented by clinicians for clinicians, it has great potential. But if it’s implemented by businesspeople for business reasons, perhaps not.”
‘The Ones Who Use the Tools the Best Will Be the Best’
One branch of medicine that stands to be dramatically affected by AI is mental health. Because bots are natural data-crunchers, they are becoming adept at analyzing the many subtle clues (phrasing in social media posts and text messages, smartwatch biometrics, therapy session videos…) that could indicate depression or other psychological disorders. In fact, the availability of AI-driven therapy via smartphone apps could help democratize and destigmatize mental healthcare.
“There is a day ahead — probably within 5 years — when a patient won’t be able to tell the difference between a real therapist and an AI therapist,” said Ken Mallon, MS, LMFT, a clinical psychotherapist and data scientist in San Jose, California. “That doesn’t worry me, though. It’s hard on therapists’ egos, but new technologies get developed. Things change. People who embrace these tools will benefit from them. The ones who use the tools the best will be the best.”
Time to Restructure Med School
Aditya Jain is in his third year at Harvard Medical School. At age 24, he’s heading into this brave new medical world with excitement and anxiety. Excitement because he sees AI revolutionizing healthcare on every level. Although the current generations of physicians and patients may grumble about its onset, he believes younger ones will feel comfortable with “DocGPT.” He’s excited that his generation of physicians will be the “translators and managers of this transition” and redefine “what it means to be a doctor.”
His anxiety, however, stems from the fact that AI has come on so fast that “it has not yet crossed the threshold of medical education,” he said. “Medical schools still largely prepare students to work as solo clinical decision makers. Most of my first 2 years were spent on pattern recognition and rote memorization, skills that AI can and will master.”
Indeed, Mr. Jain said AI was not a part of his first- or second-year curriculum. “I talk to students who are a year older than me, graduating, heading to residency, and they tell me they wish they had gotten a better grasp of how to use these technologies in medicine and in their practice. They were surprised to hear that people in my year hadn’t started using ChatGPT. We need to expend a lot more effort within the field, within academia, within practicing physicians, to figure out what our role will be in a world where AI is matching or even exceeding human intelligence. And then we need to restructure the medical education to better accomplish these goals.”
So Are You Ready for AI to Be a Better Doctor Than You?
“Yes, I am,” said Dr. Philippakis without hesitation. “When I was going through my medical training, I was continually confronted with the reality that I personally was not smart enough to keep all the information in my head that could be used to make a good decision for a patient. We have now reached a point where the amount of information that is important and useful in the practice of medicine outstrips what a human being can know. The opportunity to enable physicians with AI to remedy that situation is a good thing for doctors and, most importantly, a good thing for patients. I believe the future of medicine belongs not so much to the AI practitioner but to the AI-enabled practitioner.”
“Quick story,” added Dr. Chin. “I asked ChatGPT two questions. The first was ‘Explain the difference between Alzheimer’s and dementia’ because that’s the most common misconception in my field. And it gave me a pretty darn good answer — one I would use in a presentation with some tweaking. Then I asked it, ‘Are you a better doctor than me?’ And it replied, ‘My purpose is not to replace you, my purpose is to be supportive of you and enhance your ability.’ ”
A version of this article appeared on Medscape.com.
Oncologists Voice Ethical Concerns Over AI in Cancer Care
TOPLINE:
Most respondents said patients should not be expected to understand how AI tools work, but many also felt patients could make treatment decisions based on AI-generated recommendations. Most oncologists also felt responsible for protecting patients from biased AI, but few were confident that they could do so.
METHODOLOGY:
- The US Food and Drug Administration (FDA) has approved AI tools for use in various medical specialties over the past few decades, and increasingly, AI tools are being integrated into cancer care.
- However, the uptake of these tools in oncology has raised ethical questions and concerns, including challenges with AI bias, error, or misuse, as well as issues explaining how an AI model reached a result.
- In the current study, researchers asked 204 oncologists from 37 states for their views on the ethical implications of using AI for cancer care.
- Among the survey respondents, 64% were men and 63% were non-Hispanic White; 29% were from academic practices, 47% had received some education on AI use in healthcare, and 45% were familiar with clinical decision models.
- The researchers assessed respondents’ answers to various questions, including whether patients should provide informed consent for AI use and how oncologists would approach a scenario in which the AI model and the oncologist recommended different treatment regimens.
TAKEAWAY:
- Overall, 81% of oncologists supported obtaining patient consent before using an AI model in treatment decisions, and 85% felt that oncologists needed to be able to explain an AI-based clinical decision model to use it in the clinic; however, only 23% felt that patients also needed to be able to explain an AI model.
- When an AI decision model recommended a different treatment regimen than the treating oncologist, the most common response (36.8%) was to present both options to the patient and let the patient decide. Oncologists from academic settings were about 2.5 times more likely than those from other settings to let the patient decide. About 34% of respondents said they would present both options but recommend the oncologist’s regimen, whereas about 22% said they would present both but recommend the AI’s regimen. A small percentage would only present the oncologist’s regimen (5%) or the AI’s regimen (about 2.5%).
- About three of four respondents (76.5%) agreed that oncologists should protect patients from biased AI tools; however, only about one of four (27.9%) felt confident they could identify biased AI models.
- Most oncologists (91%) felt that AI developers were responsible for medico-legal problems associated with AI use; fewer than half said that oncologists (47%) or hospitals (43%) shared this responsibility.
IN PRACTICE:
“Together, these data characterize barriers that may impede the ethical adoption of AI into cancer care. The findings suggest that the implementation of AI in oncology must include rigorous assessments of its effect on care decisions, as well as decisional responsibility when problems related to AI use arise,” the authors concluded.
SOURCE:
The study, with first author Andrew Hantel, MD, from Dana-Farber Cancer Institute, Boston, was published last month in JAMA Network Open.
LIMITATIONS:
The study had a moderate sample size and response rate, although demographics of participating oncologists appear to be nationally representative. The cross-sectional study design limited the generalizability of the findings over time as AI is integrated into cancer care.
DISCLOSURES:
The study was funded by the National Cancer Institute, the Dana-Farber McGraw/Patterson Research Fund, and the Mark Foundation Emerging Leader Award. Dr. Hantel reported receiving personal fees from AbbVie, AstraZeneca, the American Journal of Managed Care, Genentech, and GSK.
A version of this article appeared on Medscape.com.
EHR Copy and Paste Can Get Physicians Into Trouble
Physicians who misuse the “copy-and-paste” feature in patients’ electronic health records (EHRs) can face serious consequences, including lost hospital privileges, fines, and malpractice lawsuits.
In California, a locum tenens physician lost her hospital privileges after repeatedly violating the copy-and-paste policy developed at Santa Rosa Memorial Hospital, Santa Rosa, California.
“Her use of copy and paste impaired continuity of care,” said Alvin Gore, MD, who was involved in the case as the hospital’s director of utilization management.
Dr. Gore said the hospital warned the doctor, but she did not change her behavior. He did not identify the physician, citing confidentiality. The case occurred more than 5 years ago. Since then, several physicians have been called on the carpet for violations of the policy, but no one else has lost privileges, Dr. Gore said.
“EHRs are imperfect, time consuming, and somewhat rigid,” said Robert A. Dowling, MD, a practice management consultant for large medical groups. “If physicians can’t easily figure out a complex system, they’re likely to use a workaround like copy and paste.”
Copy-and-paste abuse has also led to fines. A six-member cardiology group in Somerville, New Jersey, paid a $422,000 fine to the federal government to settle copy-and-paste charges, following an investigation by the Office of the Inspector General of the Department of Health and Human Services, according to the Report on Medicare Compliance.
This big settlement, announced in 2016, is a rare case in which physicians were charged with copy-and-paste fraud — intentionally using it to enhance reimbursement.
More commonly, Medicare contractors identify physicians who unintentionally received overpayments through sloppy copy-and-paste practices, according to a coding and documentation auditor who worked for 10 years at a Medicare contractor in Pennsylvania.
Such cases are frequent and are handled confidentially, said the auditor, who asked not to be identified. Practices must return the overpayment, and the physicians involved are “contacted and educated,” she said.
Copy and paste can also show up in malpractice lawsuits. In a 2012 survey, 53% of professional liability carriers said they had handled an EHR-related malpractice claim, and 71% of those claims included copy-and-paste use.
One such case, described by CRICO, a malpractice carrier based in Massachusetts, took place in 2012-2013. “A patient developed amiodarone toxicity because the patient’s history and medications were copied from a previous note that did not document that the patient was already on the medication,” CRICO stated.
“If you do face a malpractice claim, copying and pasting the same note repeatedly makes you look clinically inattentive, even if the copy/pasted material is unrelated to the adverse event,” CRICO officials noted in a report.
The Push to Use Copy and Paste
Copy and paste is a great time-saver. One study linked its use to lower burnout rates. However, it can easily introduce errors into the medical record. “This can be a huge problem,” Dr. Dowling said. “If, for example, you copy forward a previous note that said the patient had blood in their urine ‘6 days ago,’ it is immediately inaccurate.”
Practices can control use of copy and paste through coding clerks who read the medical records and then educate doctors when problems crop up.
The Pennsylvania auditor, who now works for a large group practice, said the group has very few copy-and-paste problems because of her role. “Not charting responsibly rarely happens because I work very closely with the doctors,” she said.
Dr. Dowling, however, reports that many physicians continue to overuse copy and paste. He points to a 2022 study, which found that, on average, half of the text in clinical notes at one health system had been copied and pasted.
One solution might be to sanction physicians for overusing copy and paste, just as they’re sometimes penalized, through reduced income or possible termination, for not completing their notes on time.
Practices could periodically audit medical records for excessive copy-paste use. EHR systems like Epic’s can indicate how much of a doctor’s note has been copied. But Dr. Dowling doesn’t know of any practices that do this.
“There is little appetite to introduce a new enforcement activity for physicians,” he said. “Physicians would see it just as a way to make their lives more difficult than they already are.”
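For practices that did want to look, the underlying arithmetic is simple enough to sketch. The toy Python below is an illustration only, not Epic’s method or any vendor’s algorithm: it estimates how much of a new note overlaps the prior note by comparing short word sequences, and the sample notes, shingle size, and 50% flag threshold are all hypothetical choices.

```python
# Toy illustration only: flag notes that largely duplicate the prior note.
# Not Epic's method or any vendor's algorithm; the notes, shingle size, and
# threshold below are hypothetical choices made for this sketch.

def shingles(text: str, n: int = 8) -> set:
    """Return the set of overlapping n-word windows ("shingles") in the text."""
    words = text.lower().split()
    return {" ".join(words[i:i + n]) for i in range(len(words) - n + 1)}

def copied_fraction(new_note: str, prior_note: str, n: int = 8) -> float:
    """Rough share of the new note's shingles that also appear in the prior note."""
    new_sh = shingles(new_note, n)
    if not new_sh:
        return 0.0
    return len(new_sh & shingles(prior_note, n)) / len(new_sh)

if __name__ == "__main__":
    prior = ("Patient reports blood in urine 6 days ago. "
             "Exam unremarkable. Plan: urology referral.")
    new = ("Patient reports blood in urine 6 days ago. "
           "Exam unremarkable. Plan: urology referral. Labs pending.")
    share = copied_fraction(new, prior, n=4)  # small n only because the toy notes are short
    if share > 0.5:  # any audit threshold would be a local policy choice, not a standard
        print(f"Flag for review: about {share:.0%} of this note matches the prior note.")
```

Even a crude screen like this would only surface notes for human review; deciding what counts as excessive reuse, and what to do about it, remains a local policy question.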
Monitoring in Hospitals and Health Systems
Some hospitals and health systems have gone as far as disabling the copy-and-paste function in their EHR systems. However, enterprising physicians have found ways around these blocks.
Some institutions, including Banner Health in Arizona, Northwell Health in New York, UConn Health in Connecticut, the University of Maryland Medical System, and the University of Toledo in Ohio, have also introduced formal policies directing doctors on how they can copy and paste.
Definitions of what is not acceptable vary, but most of these policies oppose copying someone else’s notes and direct physicians to indicate the origin of pasted material.
Santa Rosa Memorial’s policy is quite specific. It still allows some copy and paste but stipulates that it cannot be used for the chief complaint, the review of systems, the physical examination, and the assessment and plan in the medical record, except when the information can’t be obtained directly from the patient. Also, physicians must summarize test results and provide references to other providers’ notes.
Dr. Gore said he and a physician educator who works with physicians on clinical documentation proposed the policy about a decade ago. When physicians on staff were asked to comment, some said they would be opposed to a complete ban, but they generally agreed that copy and paste was a serious problem that needed to be addressed, he said.
The hospital could have simply adopted guidelines, as opposed to rules with consequences, but “we wanted our policy to have teeth,” Dr. Gore said.
When violators are identified, Dr. Gore says he meets with them confidentially and educates them on proper use of copy and paste. Sometimes, the department head is brought in. Some physicians go on to violate the policy again and have to attend another meeting, he said, but aside from the one case, no one else has been disciplined.
It’s unclear how many physicians have faced consequences for misusing copy-paste features — such data aren’t tracked, and sanctions are likely to be handled confidentially, as a personnel matter.
Geisinger Health in Pennsylvania regularly monitors copy-and-paste usage and makes it part of physicians’ professional evaluations, according to a 2022 presentation by a Geisinger official.
Meanwhile, even when systems don’t have specific policies, they may still discipline physicians when copy and paste leads to errors. Scott MacDonald, MD, chief medical information officer at UC Davis Health in Sacramento, California, told this news organization that copy-and-paste abuse has come up a few times over the years in investigations of clinical errors.
Holding Physicians Accountable
Physicians can be held accountable for copy and paste by Medicare contractors and in malpractice lawsuits, but the most obvious way is at their place of work: a practice, hospital, or health system.
One physician has lost staff privileges, but more typically, coding clerks or colleagues talk to offending physicians and try to educate them on proper use of copy and paste.
Educational outreach, however, is often ineffective, said Robert Hirschtick, MD, a retired teaching physician at Northwestern University Feinberg School of Medicine, Chicago, Illinois. “The physician may be directed to take an online course,” he said. “When they take the course, the goal is to get it done with, rather than to learn something new.”
Dr. Hirschtick’s articles on copy and paste, including one titled, “Sloppy and Paste,” have put him at the front lines of the debate. “This is an ethical issue,” he said in an interview. He agrees that some forms of copy and paste are permissible, but in many cases, “it is intellectually dishonest and potentially even plagiarism,” he said.
Dr. Hirschtick argues that copy-and-paste policies need more teeth. “Tying violations to compensation would be quite effective,” he said. “Even if physicians were rarely penalized, just knowing that it could happen to you might be enough. But I haven’t heard of anyone doing this.”
A version of this article appeared on Medscape.com.
Using AI to Transform Diabetic Foot and Limb Preservation
Diabetic foot complications represent a major global health challenge, with a high prevalence among patients with diabetes. A diabetic foot ulcer (DFU) not only affects the patient’s quality of life but also increases the risk for amputation.
Worldwide, a DFU occurs every second, and an amputation occurs every 20 seconds. The limitations of current detection and intervention methods underline the urgent need for innovative solutions.
Recent advances in artificial intelligence (AI) have paved the way for individualized risk prediction models for chronic wound management. These models use deep learning algorithms to analyze clinical data and images, providing personalized treatment plans that may improve healing outcomes and reduce the risk for amputation.
AI-powered tools can also be deployed for the diagnosis of diabetic foot complications. Using image analysis and pattern recognition, AI tools are learning to accurately detect signs of DFUs and other complications, facilitating early and effective intervention. Our group and others have been working not only on imaging devices but also on thermographic tools that — with the help of AI — can create an automated “foot selfie” to predict and prevent problems before they start.
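To make that concrete, here is a minimal, hypothetical sketch of the kind of left-right temperature comparison such a tool might automate once plantar regions have been extracted from paired thermal images. It is not any specific device’s algorithm; the region names and readings are placeholders, and the 2.2 °C cutoff is a screening threshold commonly cited in temperature-monitoring studies, treated here as a configurable assumption.

```python
# Toy illustration of the left/right temperature-asymmetry check a thermographic
# "foot selfie" tool might automate. Not any specific device's algorithm; the
# region names, readings, and threshold are hypothetical placeholders.

ASYMMETRY_THRESHOLD_C = 2.2  # commonly cited screening cutoff; treat as configurable

def regions_of_concern(left: dict, right: dict,
                       threshold: float = ASYMMETRY_THRESHOLD_C) -> list:
    """Matched plantar regions whose left/right difference meets or exceeds the threshold."""
    return [region for region in left
            if region in right and abs(left[region] - right[region]) >= threshold]

if __name__ == "__main__":
    # Hypothetical mean temperatures (deg C) extracted from paired thermal images.
    left_foot = {"hallux": 30.1, "first_metatarsal_head": 33.9, "heel": 29.8}
    right_foot = {"hallux": 30.4, "first_metatarsal_head": 31.2, "heel": 29.9}
    flagged = regions_of_concern(left_foot, right_foot)
    if flagged:
        # In a real tool, a flag would prompt earlier clinical follow-up, not a diagnosis.
        print("Asymmetry flagged in:", ", ".join(flagged))
```

A deployed system would, of course, rely on validated image segmentation and thresholds; the point is only that the screening logic AI automates here is simple, repeatable, and easy to run daily at home.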
AI’s predictive capabilities are instrumental to its clinical value. By identifying patients at high risk for DFUs, healthcare providers can implement preemptive measures, significantly reducing the likelihood of severe complications.
Although the potential benefits of AI in diabetic foot care are immense, integrating these tools into clinical practice poses challenges. These include ensuring the reliability of AI predictions, addressing data privacy concerns, and training healthcare professionals on the use of AI technologies.
As in so many other areas in our lives, AI holds the promise to revolutionize diabetic foot and limb preservation, offering hope for improved patient outcomes through early detection, precise diagnosis, and personalized care. However, realizing this potential requires ongoing research, development, and collaboration across the medical and technological fields to ensure these innovative solutions can be effectively integrated into standard care practices.
Dr. Armstrong is professor of surgery, Keck School of Medicine of University of Southern California, Los Angeles, California. He has disclosed the following relevant financial relationships: partially supported by the National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases, Award Number 1R01124789-01A1.
A version of this article first appeared on Medscape.com.
Medicine or Politics? Doctors Defend Their Social Activism
It should come as no surprise that when physicians speak out on social and political issues, there is sometimes a backlash. This can range from the typical trolling that occurs online to rarer cases of professional penalties. Two doctors were fired by NYU Langone Health late last year after they posted social media messages about the Israel-Hamas war. Still, many physicians are not only willing to stand up for what they believe in, but they see it as an essential part of their profession.
"We're now at a place where doctors need to engage in public advocacy as an urgent part of our job," wrote Rob Davidson, MD, an emergency department physician, at the onslaught of the COVID-19 pandemic. In an Op-Ed piece for The Guardian, Dr. Davidson noted how the virus forced many physicians into becoming "activist doctors," calling for adequate personal protective equipment and correcting misinformation. "What we want above all is for the administration to listen to doctors, nurses, and frontline health workers - and stop playing politics," he wrote.
'It's Not About Being Political'
The intersection of medicine and politics is hardly new. Doctors frequently testify before Congress, sharing their expertise on issues concerning public health. This, however, isn't the same as "playing politics."
"I'm not taking political stances," said Megan Ranney, MD, Dean of the Yale School of Public Health. "Rather, I'm using science to inform best practices, and I'm vocal around the area where I have expertise where we could do collectively better."
Dr. Ranney's work to end firearm injury and death garnered particular attention when she co-authored an open letter to the National Rifle Association (NRA) in 2018. She wrote the letter in response to a tweet by the organization admonishing physicians to "stay in their lane" when it comes to gun control.
Dr. Ranney's letter discussed gun violence as a public health crisis and urged the NRA to "be part of the solution" by joining the collective effort to reduce firearm injury and death through research, education, and advocacy. "We are not anti-gun," she stated. "We are anti-bullet hole," adding that "almost half of doctors own guns."
The NRA disagreed. When Dr. Ranney testified before Congress during a hearing on gun violence in 2023, NRA spokesperson Billy McLaughlin condemned her testimony as an effort to "dismantle the Second Amendment," calling Dr. Ranney "a known gun control extremist."
"If you actually read what I write, or if you actually listen to what I say, I'm not saying things on behalf of one political party or another," said Dr. Ranney. "It's not about being political. It's about recognizing our role in describing what's happening and making it clear for the world to see. Showing where, based off of data, there may be a better path to improve health and wellbeing."
In spite of the backlash, Dr. Ranney has no regrets about being an activist. "In the current media landscape, folks love to slap labels on people that may or may not be accurate. To me, what matters isn't where I land with a particular politician or political party, but how the work that I do improves health for populations."
When the Need to Act Outweighs the Fear
Laura Andreson, DO, an ob.gyn, took activism a step further when she joined a group of women in Tennessee to file a suit against the state, the attorney general, and the state board of medical examiners. The issue was Tennessee's abortion ban, which the suit claimed prevented women from getting "necessary and potentially life-saving medical care."
Dr. Andreson, who says she was "not at all" politically active in the past, began to realize how the abortion ban could drastically affect her profession and her patients. "I don't know what flipped in me, but I just felt like I could do this," she said.
Like Dr. Ranney, Dr. Andreson has been as visible as she has been vocal, giving press conferences and interviews, but she acknowledges she has some fears about safety. In fact, after filing the lawsuit, the Center for Reproductive Rights recommended that she go to a website, DeleteMe, that removes personal data from the internet, making it more difficult for people to find her information. "But my need to do this and my desire to do this is stronger than my fears," she added.
Dr. Andreson, who is part of a small practice, did check with both her coworkers and the hospital administration before moving forward with the lawsuit. She was relieved to find that she had the support of her practice and that there wasn't anything in the hospital bylaws to prevent her from filing the lawsuit. "But the people in the bigger institutions who probably have an even better expert base than I do, they are handcuffed," she said.
It has been, in Dr. Andreson's words, "a little uncomfortable" being on the board of the Tennessee Medical Association when the Tennessee Board of Medical Examiners is part of the lawsuit. "We're all members of the same group," she said. "But I'm not suing them as individuals; I'm suing them as an entity that is under our government."
Dr. Andreson said most people have been supportive of her activist work, though she admitted to feeling frustrated when she encounters apathy from fellow ob.gyns. She got little response when she circulated information explaining the abortion laws and tried to get others involved. But she still sees education as a key part of making change happen.
"I think advocacy, as someone who is considered a responsible, trustworthy person by your community, is important, because you can sway some people just by educating them," she said.
Fighting Inequities in Medicine and Beyond
Christina Chen, MD, says she felt very supported by her medical community at the Mayo Clinic in Rochester, Minnesota, when she and 16 other Asian American physicians posted a video on Instagram in 2020 highlighting increased violence and harassment of Asian Americans during COVID-19. It soon went viral, and the Mayo Clinic distributed it across their social media channels. The only negative repercussions Mayo faced were a few posts on social media saying that politics should not be brought into the healthcare space. Dr. Chen disagrees.
"Social issues and political decisions have direct impact on the health of our communities," Dr. Chen said. "We know that we still have a long way to go to solve health inequities, which is a public health problem, and we all play a huge role in voicing our concerns."
Activism, however, seems to be more complicated when it involves physicians being critical of inequities within the medical field. Nephrologist Vanessa Grubbs, MD, MPH, founded the nonprofit Black Doc Village in 2022 to raise awareness about the wrongful dismissal of Black residents and expand the Black physician workforce.
Dr. Grubbs said that the medical community has not been supportive of her activism. "The reason why I'm no longer in academia is in part because they got very upset with me tweeting about how some trainees are biased in their treatment of attendings," she said. "Senior White men attendings are often treated very differently than junior women of color faculty."
Dr. Grubbs also expressed her views in a 2020 essay in The New England Journal of Medicine, in which she criticized academic medical institutions for ignoring systemic racism, paying lip service to diversity, equity, and inclusion, and staying "deafeningly silent" when issues of racism are raised.
Today, Black Doc Village is focused on conducting research that can be used to change policy. And Dr. Grubbs now has the full support of her colleagues at West Oakland Health, in Oakland, California, which aspires to advance the Bay Area Black community's health and dignity. "So, no one here has a problem with me speaking out," she added.
The emphasis on data-driven activism, as opposed to "playing politics," is a recurring theme for many physicians who publicly engage with social issues.
"It's not partisan," Dr. Ranney said. "Rather, it's a commitment to translating science into actionable steps that can be used regardless of what political party you are in. My job is not to be on one side or the other, but to advance human health." These doctors challenge their critics to explain how such a goal is outside their purview.
A version of this article first appeared on Medscape.com.
Is A Patient Getting Under Your Skin? A Dermatologist Shares Tips for Coping
SAN DIEGO — In his role as chief medical officer for Ascension Medical Group–Texas, which employs about 1,000 physicians across every medical specialty, Dr. Reichenberg regularly hears about encounters with patients whose behavior makes care difficult.
At the annual meeting of the American Academy of Dermatology, Dr. Reichenberg, professor of dermatology at the University of Texas at Austin, shared several tips for managing such difficult patients:
Look for ‘red flags’ that raise concerns. This may include patients’ unrealistic expectations for a cure, “which could be because of their cultural or educational background,” he said. Difficult patients also may view physicians as enemies.
“They may quote legal jargon or threaten consequences if there is a bad outcome,” he explained. “They may say, ‘I’m a great reviewer on Yelp and I look forward to giving you a great Yelp review when we finish today.’ They may also have previously sued physicians, or they may tell you that their last physician was horrible.”
Shift into robot mode. In other words, don’t stray from your practice’s protocol by offering special treatment to difficult patients. For example, if a difficult patient shows up 15 minutes late and the office has a policy that patients should be rescheduled if they arrive 10 minutes late, “do not break that policy no matter what, because that’s your protocol,” he advised. “You also do not promise anything you don’t know or that nobody could know. If a difficult patient asks, ‘what is the statistical chance that I’ll get better with this treatment,’ you either say, ‘studies have shown that this is the exact percentage,’ or ‘I don’t know. We’re going to do our best.’”
Set expectations at the outset. “If I walk into the room and the nurse has been in there for 25 minutes doing the intake and I know it’s going to be a long visit, I’ll start by saying, ‘I have 8 minutes to see you today,’ ” Dr. Reichenberg said. “ ‘Whatever we don’t finish today we’ll have to do during a follow-up visit, so let’s please prioritize what we need to do.’ ” Sometimes he sets his smartphone alarm to 8 minutes and when the timer goes off, he’ll say, “I’m so sorry, but I have to go.” For talkative patients, he continued, “I’ll ask, ‘is it okay if I interrupt you if I have a clarifying question?’ That gives you permission to interrupt.”
Blame a third “party” or policy. When patients express anger, find an “enemy” that you can be angry at together. “You might say something like, ‘I’m as frustrated as you are; I can’t believe how broken our health care system is that I have only 8 minutes with you today,’ ” he advised. “Show that you’re on the same side as them.” You could also blame a policy by saying something like, “I’m sorry; I can’t do that for you. My practice has strict rules about that. I’m as frustrated as you are.”
Practice self-regulation. Here, the goal is to delay the time between being triggered by the patient who gets under your skin and your response to that person, such as saying you received “a page or an important text before you walk out of the exam room,” he said. This principle also applies to messages that unreasonable individuals send by e-mail or through their patient portal. “Probably the biggest mistakes I’ve seen from physicians is when they get really angry and they write an angry portal message or e-mail and send it out,” Dr. Reichenberg said. “If I feel triggered, I wait to respond. I’ll sometimes forward [the response] to my nurse and request that person send it out the next morning, so the reply reads, ‘Dr. Reichenberg said…’ That gives me the chance to calm down. It also gives the patient a chance to calm down.”
Never worry alone. When struggling to communicate effectively with a difficult patient, he recommends seeking input from a trusted physician colleague. “Better yet, pick up the phone and call the patient’s primary care doctor or another specialist who takes care of that person, and talk about it,” he said. “Figure out if this is your problem or the patient’s problem. They may offer advice on how to handle that person.”
Know when the conflict is untenable. Sometimes it’s best to resign from providing care to difficult patients. “I might write or say something like, ‘I resign from your care. I do not have any expertise to help you with your problem,’ ” Dr. Reichenberg said. “Or, ‘I don’t know that I have the infrastructure to handle the kind of problems you have. I’m not sure we’re the best fit.’ I would suggest that you not give every single detail about why you’re firing them, because the patients could write a step-by-step response, arguing against that.” If you decide to terminate the relationship with a patient, make sure that he or she is not in an acute phase of their illness. “You do not want to get sued for patient abandonment,” he said. “Know your state laws. In general, you’re going to give them a statement of intent to terminate — usually in 30 days — but you have to agree to treat them emergently.” Dr. Reichenberg also provides them with a referral source so they can find a new physician and waives the fee for sending medical records to the new provider. “Also, though it’s not required, I’ll include a statement about the consequences of not receiving care, if I think that they’re [neglecting] their own care,” he said.
Dr. Reichenberg reported having no financial disclosures.
FROM AAD 2024
New Quality Measure Improves Follow-Up for CRC Screening
A new quality measure designed to track whether patients receive a timely colonoscopy after an abnormal stool-based test (SBT) could help close an important gap in colorectal cancer (CRC) screening, the developers said.
As part of their work, the researchers conducted a retrospective study of 20,581 adults aged 50-75 years across 38 health systems, which showed that fewer than half (48%) had a follow-up colonoscopy within 180 days of an initial abnormal SBT for CRC.
“The low follow-up rates to an abnormal SBT were initially surprising,” first author Elizabeth L. Ciemins, PhD, MPH, MA, Research and Analytics, American Medical Group Association (AMGA), Alexandria, Virginia, told this news organization.
“However, once we interviewed clinicians and learned that this was not a measure they were tracking, along with their own incorrect assumptions of a much higher follow-up rate, the low rates made sense. As is commonly said, ‘you can’t change what you don’t measure,’” she said.
The CRC screening completion measure the researchers propose “builds on and addresses an important shortcoming in an existing measure and will help ensure complete screening for CRC,” they noted in their JAMA Network Open paper.
The key elements of the follow-up measure are the date and result of an SBT and the date of the follow-up colonoscopy — if it occurred, Dr. Ciemins explained.
“Currently, health systems are not consistently tracking this measure, but they have the data elements to do so, especially if they are doing colonoscopies in-house,” she said.
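To make the arithmetic concrete, the sketch below shows how such a rate could be computed once those two data elements have been abstracted. It is an illustration only, not the measure's official specification; the record layout and the field names sbt_date and colonoscopy_date are hypothetical.

```python
from datetime import date, timedelta

# Hypothetical abstracted records: one entry per abnormal stool-based test (SBT),
# with the date of the follow-up colonoscopy if one occurred (None otherwise).
records = [
    {"sbt_date": date(2023, 1, 10), "colonoscopy_date": date(2023, 3, 1)},
    {"sbt_date": date(2023, 2, 5),  "colonoscopy_date": None},
    {"sbt_date": date(2023, 4, 20), "colonoscopy_date": date(2024, 1, 15)},
]

WINDOW = timedelta(days=180)  # the 180-day follow-up window used in the study

def followed_up(rec):
    """True if a colonoscopy occurred within 180 days after the abnormal SBT."""
    done = rec["colonoscopy_date"]
    return done is not None and timedelta(0) <= (done - rec["sbt_date"]) <= WINDOW

numerator = sum(followed_up(r) for r in records)
denominator = len(records)
print(f"Timely follow-up rate: {numerator}/{denominator} = {numerator/denominator:.0%}")
```

On this toy data the rate works out to 1 of 3 (33%); the study's real-world figure across 38 health systems was 48%.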
Field testing showed that use of this new measure is “feasible, valid, and reliable,” the authors said. Dr. Ciemins believed this CRC screening completion measure could be widely implemented.
“Three AMGA member health systems feasibility tested the data elements and found that they could reliably abstract the required elements from electronic health records (EHRs),” she told this news organization.
The researchers are currently testing the measure among 20 AMGA member health systems, which are submitting quarterly data on a version of the specified measure.
“Advancing this measure as a quality performance measure could significantly increase the early detection of CRC, thereby improving health and ultimately saving lives,” the authors concluded in their paper.
The Right Direction, But Questions Remain
The coauthors of a linked commentary said this research highlights the “suboptimal” rates of a timely follow-up colonoscopy after positive SBT results. They applauded the authors for “focusing attention on a meaningful approach to measuring high-quality CRC screening and providing guidance for standardized measurement.”
However, several questions arise from this study, “including whether 6 months is the ideal interval for colonoscopy completion after a positive SBT result, where this measure fits in the context of existing CRC screening measures, and how to implement it in practice,” Jennifer K. Maratt, MD, with Indiana University School of Medicine, Indianapolis, and coauthors wrote.
“This measure alone does not address all the gaps in the screening process, nor does it address barriers to colonoscopy completion, but it points us in the right direction for measuring the success of screening programs,” Dr. Maratt and her colleagues added.
The study was supported by a grant from the AARP. The authors and editorial writers had no relevant disclosures.
A version of this article appeared on Medscape.com.
Congress Directly Provides $10 Million for Arthritis Research for First Time
Congress provided $10 million to fund arthritis research in the recently passed federal fiscal year 2024 budget.
The new arthritis program is part of the Department of Defense’s (DOD’s) Congressionally Directed Medical Research Programs (CDMRP), which provides dedicated funding to study certain diseases and health conditions.
This is the CDMRP's first stand-alone arthritis research program, though the organization has previously funded arthritis-related research through its other programs, including the chronic pain management, joint warfighter medical, peer-reviewed orthopedic, peer-reviewed medical, and tick-borne disease programs.
It is not yet known what specific aspects of arthritis this funding will go toward. The standard process for new programs involves speaking with researchers, clinicians, and individuals with these targeted health conditions to better understand research gaps and narrow focus, Akua Roach, PhD, the program manager for this new CDMRP arthritis research program, told this news organization.
“We’re not going to be able to solve every question,” she said, though the allocated $10 million is “a great number to do a lot of great work.”
While the CDMRP is under the DOD, research funding can go to studying patient populations outside of military personnel or veterans, she added.
“I think that is perhaps a common misconception that if you are getting funding from the DOD, that you have to have a DOD population, and that is not true,” she said.
Another misconception is that CDMRP funding only goes to military treatment facilities. In fact, on average, 92% of CDMRP funding goes to academia, industry, and other nonmilitary recipients, noted CDMRP Director Colonel Sarah Goldman.
“Anyone around the world can apply for funding,” she told this news organization. “We want to fund the best research.”
Because the funding is provided under the defense bill, there will be discussions around the military relevance of the research, she added, and that relevance extends not only to service members but also to their families.
CDMRP anticipates that funding opportunities through this new arthritis research program will be available by July or August 2024.
A version of this article appeared on Medscape.com.
CDMRP anticipates that funding opportunities through this new arthritis research program will be available by July or August 2024.
A version of this article appeared on Medscape.com.