In this Practical AI column, we’ve explored everything from large language models to the nuances of trial matching, but one of the most immediate and impactful applications of AI is unfolding right now in breast imaging. For oncologists, this isn’t an abstract future — with new screening guidelines, dense-breast mandates, and a shrinking radiology workforce, it’s the imaging reports and patient questions landing in your clinic today.
Here is what oncologists need to know, and how to put it to work for their patients.
Why AI in Mammography Matters
More than 200 million women undergo breast cancer screening each year. In the US alone, 10% of the 40 million women screened annually require additional diagnostic imaging, and 4%–5% of these women are eventually diagnosed with breast cancer.
Two major shifts are redefining breast cancer screening in the US: The US Preventive Services Task Force (USPSTF) now recommends biennial screening from age 40 to 74 years, and notifying patients of breast density is a federal requirement as of September 10, 2024. That means more mammograms, more patient questions, and more downstream oncology decisions. Patients will increasingly ask about “dense” breast results and what to do next. Add a national radiologist shortage into the mix, and the pressure on timely callbacks, biopsies, and treatment planning will only grow.
Can AI Help Without Compromising Care?
The short answer is yes. Implemented deliberately, AI may turn these rate-limiting steps into opportunities for earlier detection, decentralized screening, and smarter triage, and could spare hundreds of thousands of women an unnecessary diagnostic procedure.
Don’t Confuse Today’s AI With Yesterday’s CAD
Think of older computer-aided detection (CAD) like a 1990s chemotherapy drug: It sometimes helped, but it came with significant toxicity and rarely delivered consistent survival benefits. Today’s deep-learning AI is closer to targeted therapy — trained on millions of “trial participants” (mammograms), more precise, and applied in specific contexts where it adds value. If you once dismissed CAD as noise, it’s time to revisit what AI can now offer.
The role of AI is broader than drawing boxes. It provides second readings, worklist triage, risk prediction, density assessment, and decision support. The FDA has cleared several AI tools for both 2D mammography and digital breast tomosynthesis (DBT), including iCAD ProFound (DBT), ScreenPoint Transpara (2D/DBT), and Lunit INSIGHT DBT.
Some of the strongest evidence for AI in mammography is as a second reader during screening. Large trials show that AI plus one radiologist can match double reading by two radiologists while cutting workload by roughly 40%. For example, the MASAI randomized trial (39,996 vs 40,024 participants) showed that AI-supported screening achieved similar cancer detection but cut human screen-reading workload by about 44% vs standard double reading. The primary interval cancer outcomes are still maturing, but the safety analysis is reassuring.
Reducing second reads and arbitration time matters for clinicians because it frees capacity for callbacks and diagnostic workups. This will be especially important now that screening starts at age 40: roughly 21 to 22 million more women are newly eligible, translating to about 10 to 11 million additional mammograms each year under biennial screening.
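To put rough numbers on that capacity shift, here is a back-of-the-envelope sketch in Python. It simply combines the approximate figures cited above (the newly eligible cohort, the 10% callback rate, the 4%-5% cancer yield among callbacks, and MASAI's roughly 44% workload reduction); these are illustrative assumptions, not local program data, and your own inputs will differ.

```python
# Back-of-the-envelope arithmetic using the approximate figures cited in this
# column; these are illustrative assumptions, not local program data.

newly_eligible_women = 21_500_000      # ~21-22 million women newly covered from age 40
screening_interval_years = 2           # USPSTF biennial recommendation
callback_rate = 0.10                   # ~10% of screens need additional diagnostic imaging
cancer_yield_among_callbacks = 0.045   # ~4%-5% of callbacks are eventually diagnosed
workload_reduction_with_ai = 0.44      # ~44% fewer human screen reads (MASAI) vs double reading

additional_mammograms_per_year = newly_eligible_women / screening_interval_years
additional_callbacks_per_year = additional_mammograms_per_year * callback_rate
additional_cancers_per_year = additional_callbacks_per_year * cancer_yield_among_callbacks

# If those exams were double read, the human reads an AI-supported
# single-reader workflow could save at MASAI-like performance:
reads_saved_per_year = additional_mammograms_per_year * 2 * workload_reduction_with_ai

print(f"Additional screening mammograms/year: {additional_mammograms_per_year:,.0f}")
print(f"Additional diagnostic callbacks/year: {additional_callbacks_per_year:,.0f}")
print(f"Additional cancers detected/year (rough): {additional_cancers_per_year:,.0f}")
print(f"Human screen reads potentially saved/year: {reads_saved_per_year:,.0f}")
```

Even with generous error bars, the order of magnitude is the point: millions of extra reads are coming, and most of the downstream work lands on a small fraction of patients.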
Another important area where AI can make its mark in mammography is triage and time to diagnosis. The results from a randomized implementation study showed that AI-prioritized worklists accelerated time to additional imaging and biopsy diagnosis without harming efficiency for others — exactly the kind of outcome patients feel.
Multiple studies have demonstrated improved diagnostic performance and shorter reading times when AI supports DBT interpretation, which is important because DBT can otherwise be time intensive.
We are also seeing rapid advancement in risk-based screening, moving beyond a binary dense vs not dense approach. Deep-learning risk models, such as Mirai, predict 1- to 5-year breast cancer risk directly from the mammogram, and these tools are now being assessed prospectively to guide supplemental MRI. Cost-effectiveness modeling supports risk-stratified intervals vs one-size-fits-all schedules.
Finally, automated density tools, such as Transpara Density and Volpara, offer objective, reproducible volumetric measures that map to the Breast Imaging Reporting and Data System (BI-RADS), which is useful for Mammography Quality Standards Act-required reporting and as inputs to risk calculators.
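As a concrete illustration of how a volumetric density output might be binned into a BI-RADS-style category and handed to a risk calculator, here is a minimal Python sketch. The percentage thresholds and the function name are illustrative assumptions only; they are not the validated cut points used by Volpara, Transpara Density, or BI-RADS itself.

```python
# Minimal sketch: bin a volumetric breast density percentage into a
# BI-RADS-style category. Thresholds below are illustrative assumptions,
# not vendor or ACR cut points; real tools apply their own validated mappings.

def birads_style_density_category(volumetric_density_pct: float) -> str:
    """Return an illustrative BI-RADS-style density category (a-d)."""
    if volumetric_density_pct < 3.5:
        return "a (almost entirely fatty)"
    elif volumetric_density_pct < 7.5:
        return "b (scattered fibroglandular density)"
    elif volumetric_density_pct < 15.5:
        return "c (heterogeneously dense)"
    else:
        return "d (extremely dense)"

# Example: an automated tool reports 12.3% volumetric density for a patient.
print(birads_style_density_category(12.3))  # "c (heterogeneously dense)" under these assumed thresholds
```

A reproducible category like this, alongside age, family history, and prior biopsies, is exactly the kind of structured input a downstream risk calculator or AI risk model can consume.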
While early evidence suggests AI may help surface future or interval cancers earlier, including more invasive tumors, the definitive impacts on interval cancer rates and mortality require longitudinal follow-up, which is now in progress.
Pitfalls to Watch For
Bias is real. Studies show false-positive differences by race, age, and density. AI can even infer racial identity from images, potentially amplifying disparities. Performance can also shift by vendor, demographics, and prevalence.
A Radiology study of 4855 DBT exams showed that an algorithm produced more false-positive case scores in Black patients, in older patients (aged 71-80 years), and in women with extremely dense breasts. This can happen because AI can infer proxies for race directly from images, even when humans cannot, and it can propagate disparities if not addressed. External validations and reviews emphasize that performance can shift with device manufacturer, demographics, and prevalence, which is why every tool needs local validation and calibration.
Here’s a pragmatic adoption checklist before going live with an AI tool.
- Confirm FDA clearance: Verify the name and version of the algorithm, imaging modes (2D vs DBT), and operating points. Confirm 510(k) numbers.
- Local validation: Test on your patient mix and vendor stack (Hologic, GE, Siemens, Fuji). Compare this to your baseline recall rate, positive predictive value of recall (PPV1), cancer detection rate, and reading time. Commit to recalibration if drift occurs.
- Equity plan: Monitor false-positive and false-negative rates by age, race/ethnicity, and density; document corrective actions if disparities emerge. (This isn’t optional.)
- Workflow clarity: Is AI a second reader, an additional reader, or a triage tool? Who arbitrates discordance? What’s the escalation path for high-risk or interval cancer-like patterns?
- Regulatory strategy: Confirm whether the vendor has (or will file) a Predetermined Change Control Plan so models can be updated safely without repeated submissions. Also confirm how you’ll be notified about performance-relevant changes.
- Data governance: Audit logs of AI outputs, retention, protected health information handling, and the patient communication policy for AI-assisted reads.
After going live, set up a quarterly dashboard. It should include cancer detection rate per 1000 patients, recall rate, PPV1, interval cancer rate (as it matures), reading time, and turnaround time to diagnostic imaging or biopsy — all stratified by age, race/ethnicity, and density.
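If your reporting system can export one row per screening exam, the dashboard itself is little more than a stratified group-by. Here is a minimal Python/pandas sketch; the column names (recalled, cancer_detected, and so on) are hypothetical placeholders rather than any vendor's or registry's schema, and interval cancer tracking is omitted because it requires linked follow-up data as it matures.

```python
import pandas as pd

# Minimal sketch of a quarterly AI-monitoring dashboard, assuming one row per
# screening exam. Column names are hypothetical placeholders, not a specific
# vendor's or registry's schema:
#   recalled (bool), cancer_detected (bool), reading_time_min (float),
#   days_to_diagnostic_workup (float), age_group, race_ethnicity, density (str)

def quarterly_dashboard(exams: pd.DataFrame, strata: list[str]) -> pd.DataFrame:
    grouped = exams.groupby(strata)
    recalled = exams[exams["recalled"]].groupby(strata)
    dashboard = pd.DataFrame({
        "exams": grouped.size(),
        "cdr_per_1000": grouped["cancer_detected"].mean() * 1000,  # cancer detection rate
        "recall_rate": grouped["recalled"].mean(),
        "ppv1": recalled["cancer_detected"].mean(),                # cancers among recalled exams
        "median_reading_time_min": grouped["reading_time_min"].median(),
        "median_days_to_workup": grouped["days_to_diagnostic_workup"].median(),
    })
    return dashboard.round(3)

# Example usage, stratified by the equity-relevant variables named above:
# exams = pd.read_csv("screening_exams_q3.csv")
# print(quarterly_dashboard(exams, ["age_group", "race_ethnicity", "density"]))
```

Trend these numbers quarter over quarter, and treat a widening gap between subgroups like any other quality signal: investigate, recalibrate, and document.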
Below, I look at what this means through the lens of Moravec’s paradox (machines excel at what clinicians find hard, and vice versa) and offer a possible playbook for putting these tools to work.
What to Tell Patients
When speaking with patients, emphasize that a radiologist still reads their mammogram. AI helps with consistency and efficiency; it doesn’t replace human oversight. Patients with dense breasts should still expect a standard notice; discussion of individualized risk factors, such as family history, genetics, and prior biopsies; and consideration of supplemental imaging if risk warrants. But it’s also important to tell these patients that while dense breasts are common, they do not automatically mean high cancer risk.
As for screening schedules, remind patients that screening is at least biennial from 40 to 74 years of age per the USPSTF guidelines; however, specialty groups may recommend starting on an annual schedule at 40.
What You Can Implement Now
There are multiple practical use cases you can introduce now. One is to use AI as a second reader or an additional reader safety net to preserve detection while reducing human workload. This helps your breast center absorb screening expansion to age 40 without diluting quality. Another is to turn on AI triage to shorten the time to callback and biopsy for the few who need it most — patients notice and appreciate faster answers. You can also begin adopting automated density plus risk models to move beyond “dense/not dense.” For selected patients, AI-informed risk can justify MRI or tailored intervals.
Here’s a quick cheat sheet (for your next leadership or tumor-board meeting).
Do:
- Use AI as a second or additional reader or triage tool, not as a black box.
- Track cancer detection rate, recall, PPV1, interval cancers, and reading time, stratified by age, race, and breast density.
- Pair automated density with AI risk to personalize screening and supplemental imaging.
- Enroll patients in future clinical trials, such as PRISM, the first large-scale randomized controlled trial of AI for screening mammography. This US-based, $16 million, seven-site study is funded by the Patient-Centered Outcomes Research Institute.
Don’t:
- Assume “AI = CAD.” The 2015 CAD story is over; modern deep learning systems are different and require different oversight.
- Go live without a local validation and equity plan or without clarity on software updates.
- Forget to remind patients that screening starts at age 40, and dense breast notifications are now universal. Use the visit to discuss risk, supplemental imaging, and why a human still directs their care.
The Bottom Line
AI won’t replace radiologists or read mammograms for us — just as PET scans didn’t replace oncologists and stethoscopes didn’t make cardiologists obsolete. What it will do is catch what the tired human eye might miss, shave days off anxious waiting, and turn breast density into data instead of doubt. For oncologists, that means staging sooner, enrolling smarter, and spending more time talking with patients instead of chasing callbacks.
In short, AI may not take the picture, but it helps us frame the story, making it sharper, faster, and with fewer blind spots. By pairing this powerful technology with rigorous, equity-focused local validation and transparent governance under the FDA’s emerging Predetermined Change Control Plan framework, we can realize the tangible benefits of practical AI for our patients without widening disparities.
Now, during Breast Cancer Awareness Month, how about we add AI to that pink ribbon? How cool would that be?
Thoughts? Drop me a line at [email protected]. Let’s keep the conversation — and pink ribbons — going.
Arturo Loaiza-Bonilla, MD, MSEd, is the co-founder and chief medical AI officer at Massive Bio, a company connecting patients to clinical trials using artificial intelligence. His research and professional interests focus on precision medicine, clinical trial design, digital health, entrepreneurship, and patient advocacy. Dr Loaiza-Bonilla serves as Systemwide Chief of Hematology and Oncology at St. Luke’s University Health Network, where he maintains a connection to patient care by attending to patients 2 days a week.
A version of this article first appeared on Medscape.com.