“New antibiotics discovered using AI!”
That’s how headlines read in December 2023, when MIT researchers announced a new class of antibiotics that could wipe out the drug-resistant superbug methicillin-resistant Staphylococcus aureus (MRSA) in mice.
Powered by deep learning, the study was a significant breakthrough. Few new classes of antibiotics have emerged since the 1960s, and this one in particular could be crucial in fighting tough-to-treat MRSA, which kills more than 10,000 people annually in the United States.
But as remarkable as the antibiotic discovery was, it may not be the most impactful part of this study.
“Of course, we view the antibiotic-discovery angle to be very important,” said Felix Wong, PhD, a co-lead author of the study and postdoctoral fellow at the Broad Institute of MIT and Harvard, Cambridge, Massachusetts. “But I think equally important, or maybe even more important, is really our method of opening up the black box.”
The “black box” of a complex machine learning model — the opaque internal reasoning behind its outputs — is generally thought of as impenetrable, and that poses a challenge in drug discovery.
“A major bottleneck in AI-ML-driven drug discovery is that nobody knows what the heck is going on,” said Dr. Wong. The models’ architectures are so complex that their decision-making is effectively opaque.
Researchers input data, such as patient features, and the model says what drugs might be effective. But researchers have no idea how the model arrived at its predictions — until now.
What the Researchers Did
Dr. Wong and his colleagues first screened 39,000 compounds for antibiotic activity against MRSA. They fed information about the compounds’ chemical structures and antibiotic activity into their machine learning model. With this, they “trained” the model to predict whether a compound is antibacterial.
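For readers who want a concrete picture of that training step, here is a minimal sketch in code. It is an illustration only: the MIT team used deep neural networks, whereas this stand-in uses a random-forest classifier on molecular fingerprints, and every compound, label, and function name below is hypothetical.

```python
# Minimal sketch of the training step: learn to predict antibacterial
# activity from chemical structure alone. The MIT study used deep
# learning; a random forest on Morgan fingerprints stands in here.
# The SMILES strings and activity labels are hypothetical toy data.
import numpy as np
from rdkit import Chem
from rdkit.Chem import AllChem
from sklearn.ensemble import RandomForestClassifier

def featurize(smiles: str) -> np.ndarray:
    """Encode a molecule as a 2048-bit Morgan fingerprint."""
    mol = Chem.MolFromSmiles(smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, radius=2, nBits=2048)
    return np.array(fp)

# Toy training set: (SMILES, 1 = inhibits MRSA growth, 0 = inactive)
train = [
    ("CC1(C)SC2C(NC(=O)Cc3ccccc3)C(=O)N2C1C(=O)O", 1),  # penicillin G
    ("Oc1cc(Cl)ccc1Oc1ccc(Cl)cc1Cl", 1),                # triclosan
    ("CC(=O)Oc1ccccc1C(=O)O", 0),                       # aspirin
    ("c1ccccc1", 0),                                    # benzene
]
X = np.stack([featurize(s) for s, _ in train])
y = np.array([label for _, label in train])

activity_model = RandomForestClassifier(n_estimators=200, random_state=0)
activity_model.fit(X, y)
# The trained model now scores unseen compounds by structure alone.
print(activity_model.predict_proba(featurize("CCO").reshape(1, -1))[0, 1])
```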
Next, they used additional deep learning to narrow the field, ruling out compounds toxic to humans. Then, deploying their various models at once, they screened 12 million commercially available compounds. Five classes emerged as likely MRSA fighters. Further testing of 280 compounds from the five classes produced the final results: two compounds from the same class, both of which reduced MRSA infection in mouse models.
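Continuing the hypothetical sketch above, the screening step then amounts to running every candidate in a large library through the trained models in bulk and keeping only compounds that score as antibacterial but not as toxic. The toxicity model and thresholds here are stand-ins, not the study’s actual cutoffs.

```python
# Sketch of the virtual screen: score a compound library with the activity
# model and a (hypothetical) toxicity model, keeping compounds predicted
# active against MRSA but nontoxic. Thresholds are illustrative only.
def screen(smiles_library, activity_model, toxicity_model,
           min_activity=0.9, max_toxicity=0.2):
    hits = []
    for smi in smiles_library:
        x = featurize(smi).reshape(1, -1)
        p_active = activity_model.predict_proba(x)[0, 1]
        p_toxic = toxicity_model.predict_proba(x)[0, 1]
        if p_active >= min_activity and p_toxic <= max_toxicity:
            hits.append((smi, p_active, p_toxic))
    # Highest-confidence candidates first, ready for lab validation.
    return sorted(hits, key=lambda hit: -hit[1])
```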
How did the computer flag these compounds? The researchers sought to answer that question by figuring out which chemical structures the model had been looking for.
A chemical structure can be “pruned” — that is, scientists can remove certain atoms and bonds to reveal an underlying substructure. The MIT researchers used Monte Carlo tree search, an algorithm commonly used in machine learning, to select which atoms and bonds to edit out. Then they fed the pruned substructures into their model to find out which was likely responsible for the antibacterial activity.
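The search over possible prunings is itself an algorithmic problem. As a rough illustration, the sketch below substitutes a simple greedy loop for the paper’s Monte Carlo tree search — deleting one peripheral atom at a time and keeping the deletion only when the model still predicts high activity — reusing the hypothetical featurize function and activity model from the earlier sketches.

```python
# Greedy stand-in for the substructure search: repeatedly try deleting a
# peripheral atom and keep the smaller molecule only if the model still
# predicts high antibacterial activity. The actual study used Monte Carlo
# tree search, which explores many prune orders instead of one greedy path.
from rdkit import Chem

def predicted_activity(mol, model):
    return model.predict_proba(
        featurize(Chem.MolToSmiles(mol)).reshape(1, -1))[0, 1]

def prune_to_rationale(smiles, model, min_activity=0.8, min_atoms=8):
    mol = Chem.MolFromSmiles(smiles)
    improved = True
    while improved and mol.GetNumAtoms() > min_atoms:
        improved = False
        for atom in mol.GetAtoms():
            if atom.GetDegree() != 1:        # only trim peripheral atoms
                continue
            candidate = Chem.RWMol(mol)
            candidate.RemoveAtom(atom.GetIdx())
            try:
                Chem.SanitizeMol(candidate)  # skip chemically invalid prunes
            except Exception:
                continue
            if predicted_activity(candidate, model) >= min_activity:
                mol = candidate.GetMol()
                improved = True
                break                        # rescan the pruned molecule
    return Chem.MolToSmiles(mol)             # candidate causative substructure
```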
“The main idea is we can pinpoint which substructure of a chemical structure is causative instead of just correlated with high antibiotic activity,” Dr. Wong said.
This could fuel new “design-driven” or generative AI approaches where these substructures become “starting points to design entirely unseen, unprecedented antibiotics,” Dr. Wong said. “That’s one of the key efforts that we’ve been working on since the publication of this paper.”
More broadly, their method could lead to discoveries in drug classes beyond antibiotics, such as antivirals and anticancer drugs, according to Dr. Wong.
“This is the first major study that I’ve seen seeking to incorporate explainability into deep learning models in the context of antibiotics,” said César de la Fuente, PhD, an assistant professor at the University of Pennsylvania, Philadelphia, Pennsylvania, whose lab has been engaged in AI for antibiotic discovery for the past 5 years.
“It’s kind of like going into the black box with a magnifying lens and figuring out what is actually happening in there,” Dr. de la Fuente said. “And that will open up possibilities for leveraging those different steps to make better drugs.”
How Explainable AI Could Revolutionize Medicine
In studies, explainable AI is showing its potential for informing clinical decisions as well — flagging high-risk patients and letting doctors know why that calculation was made. University of Washington (UW) researchers have used the technology to predict whether a patient will have hypoxemia during surgery, revealing which features contributed to the prediction, such as blood pressure or body mass index. Another study used explainable AI to help emergency medical services providers and emergency room clinicians use their time more efficiently — for example, by identifying trauma patients at high risk for acute traumatic coagulopathy more quickly.
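Dr. Lee’s group co-developed SHAP, one of the most widely used frameworks for this kind of per-patient feature attribution. As a minimal, hypothetical sketch of the idea — with synthetic data and made-up clinical features rather than anything from the published studies — attribution on a risk model can look like this:

```python
# Minimal sketch of per-patient feature attribution with SHAP. The
# clinical features, synthetic data, and risk rule below are hypothetical
# stand-ins, not the published hypoxemia model.
import numpy as np
import shap
from sklearn.ensemble import GradientBoostingClassifier

rng = np.random.default_rng(0)
features = ["blood_pressure", "bmi", "spo2", "age"]
X = rng.normal(size=(500, 4))
# Synthetic rule: low blood pressure or high BMI raises hypoxemia risk.
y = ((X[:, 0] < -0.5) | (X[:, 1] > 0.8)).astype(int)

model = GradientBoostingClassifier().fit(X, y)

# Explain one patient's risk score: which features pushed it up or down?
explainer = shap.TreeExplainer(model)
patient = X[:1]
shap_values = explainer.shap_values(patient)
for name, contribution in zip(features, shap_values[0]):
    print(f"{name}: {contribution:+.3f}")
```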
A crucial benefit of explainable AI is its ability to audit machine learning models for mistakes, said Su-In Lee, PhD, a computer scientist who led the UW research.
For example, a surge of research during the pandemic suggested that AI models could predict COVID-19 infection based on chest x-rays. Dr. Lee’s research used explainable AI to show that many of those studies were not as accurate as they claimed. Her lab revealed that many models’ decisions were based not on pathology but on confounding features, such as laterality markers in the corners of x-rays or medical devices worn by patients (like pacemakers). She applied the same model-auditing technique to AI-powered dermatology devices, digging into the flawed reasoning in their melanoma predictions.
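One simple way to run this kind of audit — not necessarily the exact method the lab used — is occlusion sensitivity: mask one region of the image at a time and watch how the prediction moves. If hiding a corner marker shifts the output more than hiding the lungs, the model is keying on the wrong signal. A minimal sketch, assuming a generic predict function that returns a probability:

```python
# Occlusion-sensitivity sketch for auditing an image classifier: slide a
# mask over the x-ray and record how much the predicted probability drops.
# Hot spots far from the lungs (e.g., corner laterality markers) are a red
# flag. `predict` is a hypothetical model-scoring function; the published
# audits used more sophisticated attribution methods.
import numpy as np

def occlusion_map(image, predict, patch=16):
    h, w = image.shape
    baseline = predict(image)
    heat = np.zeros((h // patch, w // patch))
    for i in range(0, h - patch + 1, patch):
        for j in range(0, w - patch + 1, patch):
            masked = image.copy()
            masked[i:i + patch, j:j + patch] = image.mean()  # occlude region
            heat[i // patch, j // patch] = baseline - predict(masked)
    return heat  # large values mark regions the model relies on
```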
Explainable AI is beginning to affect drug development too. A 2023 study led by Dr. Lee used it to explain how to select complementary drugs for acute myeloid leukemia patients based on the differentiation levels of cancer cells. And in two other studies aimed at identifying Alzheimer’s therapeutic targets, “explainable AI played a key role in terms of identifying the driver pathway,” she said.
Currently, US Food and Drug Administration (FDA) approval doesn’t require an understanding of a drug’s mechanism of action. But the issue is being raised more often, including at December’s Health Regulatory Policy Conference at MIT’s Jameel Clinic. And just over a year ago, Dr. Lee predicted that the FDA approval process would come to incorporate explainable AI analysis.
“I didn’t hesitate,” Dr. Lee said, regarding her prediction. “We didn’t see this in 2023, so I won’t assert that I was right, but I can confidently say that we are progressing in that direction.”
What’s Next?
The MIT study is part of the Antibiotics-AI project, a 7-year effort to leverage AI to find new antibiotics. Phare Bio, a nonprofit started by MIT professor James Collins, PhD, and others, will do clinical testing on the antibiotic candidates.
Even with the AI’s assistance, there’s still a long way to go before clinical approval.
But knowing which elements contribute to a candidate’s effectiveness against MRSA could help the researchers formulate scientific hypotheses and design better validation, Dr. Lee noted. In other words, because they used explainable AI, they could be better positioned for clinical trial success.
A version of this article appeared on Medscape.com.