Building an AI Army of Digital Twins to Fight Cancer

A patient has cancer. It’s decision time.

Clinician and patient alike face a daunting challenge when making those decisions. They have to weigh the patient’s individual circumstances, available treatment options, potential side effects, relevant clinical data such as the patient’s genetic profile and cancer specifics, and more.

“That’s a lot of information to hold,” said Uzma Asghar, PhD, MRCP, a British consultant medical oncologist at The Royal Marsden Hospital and a chief scientific officer at Concr LTD.

What if there were a way to test — quickly and accurately — all the potential paths forward?

That’s the goal of digital twins. An artificial intelligence (AI)–based program uses all the known data on patients and their types of illness and creates a “twin” that can be used over and over to simulate disease progression, test treatments, and predict individual responses to therapies.

“What the [digital twin] model can do for the clinician is to hold all that information and process it really quickly, within a couple of minutes,” Asghar noted.

A digital twin is more than just a computer model or simulation because it copies a real-world person and relies on real-world data. Some digital twin programs also integrate new information as it becomes available. This technology holds promise for personalized medicine, drug discovery, developing screening strategies, and better understanding diseases.

How to Deliver a Twin

To create a digital twin, experts develop a computer model with data to hone its expertise in an area of medicine, such as cancer types and treatments. Then “you train the model on information it’s seen, and then introduce a patient and patient’s information,” said Asghar.
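
The article describes this workflow only at a high level. The toy sketch below is purely illustrative of the train-then-predict pattern Asghar describes: the patient features, the cohort, and the model (scikit-learn’s LogisticRegression standing in for a real digital twin engine) are all assumptions, not Concr’s actual system.

```python
# Toy illustration of the train-then-predict workflow described above.
# Everything here is hypothetical: the features, the cohort, and the model.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Historical cohort: one row per past patient,
# columns = [age, tumor_size_cm, genomic_risk_score]; label = responded (1) or not (0)
X_history = rng.normal(loc=[60.0, 3.0, 0.5], scale=[10.0, 1.0, 0.2], size=(500, 3))
y_history = rng.integers(0, 2, size=500)

# 1) Train the model on information it has already "seen"
model = LogisticRegression(max_iter=1000).fit(X_history, y_history)

# 2) Introduce a new patient and that patient's information
new_patient = np.array([[58.0, 2.4, 0.7]])

# 3) The "twin" returns a predicted probability of treatment response in seconds
prob_response = model.predict_proba(new_patient)[0, 1]
print(f"Predicted probability of response: {prob_response:.2f}")
```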

Asghar is currently working with colleagues to develop digital twins that could eventually help in exactly that scenario: a doctor and patient deciding on the best course of cancer treatment. But the applications of digital twins are manifold, particularly in clinical research.

Digital twins often include a machine learning component, which falls under the umbrella term of AI, said Asghar, but they are not like ChatGPT or the other generative AI models many people are now familiar with.

“The difference here is the model is not there to replace the clinician or to replace clinical trials,” Asghar noted. Instead, digital twins help make decisions faster in a way that can be more affordable.

Digital Twins to Predict Cancer Outcomes

Asghar is currently involved in UK clinical trials enrolling patients with cancer to test the accuracy of digital twin programs.

At this point, these studies do not yet use digital twins to guide the course of treatment, though that is the eventual goal. For now, the work is at the validation stage: the digital twin program makes predictions about the treatments, and the researchers later evaluate how accurate those predictions were against the enrolled patients’ real outcomes.

Their current model gives predictions for RECIST (Response Evaluation Criteria in Solid Tumors), treatment response, and survival. In addition to collecting data from ongoing clinical trials, they’ve used retrospective data, such as from the Cancer Tumor Atlas, to test the model.

“We’ve clinically validated it now in over 9000 patients,” said Asghar, who noted that they are constantly testing it on new patients. Their data include 30 chemotherapies and 23 cancer types, but they are focusing on four: triple-negative breast cancer, cancer of unknown primary, pancreatic cancer, and colorectal cancer.

“The reason for choosing those four cancer types is that they are aggressive, their response to chemotherapy isn’t as great, and the outcome for those patient populations, there’s significant room for improvement,” Asghar explained.

Currently, Asghar said, the model’s predictions match the actual clinical outcomes around 80%-90% of the time.
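
To make that figure concrete, here is a minimal, purely hypothetical sketch of the validation comparison: predicted response categories are checked against what actually happened, and the fraction that match is the accuracy. The categories and values are invented for illustration, not trial data.

```python
# Illustrative only: compare the twin's predicted response category with the
# observed outcome for each enrolled patient, then compute percent correct.
predicted = ["partial_response", "progression", "stable", "partial_response", "stable"]
observed = ["partial_response", "progression", "progression", "partial_response", "stable"]

matches = sum(p == o for p, o in zip(predicted, observed))
accuracy = matches / len(observed)
print(f"Prediction accuracy: {accuracy:.0%}")  # 80% in this made-up example
```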

The final stage of their work, before the model becomes widely available to clinicians, will be to integrate it into a clinical trial in which some clinicians use the model to guide treatment decisions and others do not. By comparing patient outcomes in the two groups, the researchers will be able to determine the value of the digital twin program they created.

What Else Can a Twin Do? A Lot

While a model that helps clinicians make decisions about cancer treatments may be among the first digital twin programs that become widely available, there are many other kinds of digital twins in the works.

For example, a digital twin could be used as a benchmark to determine how a patient’s cancer might have progressed without treatment. Say a patient’s tumor grew during treatment. It might seem like the treatment failed, but a digital twin might show that, left untreated, the tumor would have grown five times as fast, said Paul Macklin, PhD, professor in the Department of Intelligent Systems Engineering at Indiana University Bloomington.

Alternatively, if the virtual patient’s tumor is around the same size as the real patient’s tumor, “that means that treatment has lost its efficacy. It’s time to do something new,” said Macklin. And a digital twin could help with not only choosing a therapy but also choosing a dosing schedule, he noted.
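
A minimal sketch of the benchmarking idea Macklin describes, assuming a simple exponential growth model: the baseline volume, growth rates, and threshold below are all hypothetical, not values from any real digital twin.

```python
import math

# Hypothetical numbers only; a real digital twin would use a far richer,
# patient-calibrated growth model.
v0 = 2.0               # baseline tumor volume, cm^3
untreated_rate = 0.05  # per-day growth rate the twin assumes without treatment
treated_rate = 0.01    # per-day growth rate actually observed under treatment
days = 90

v_untreated = v0 * math.exp(untreated_rate * days)  # twin's counterfactual trajectory
v_treated = v0 * math.exp(treated_rate * days)      # what was actually measured

print(f"Counterfactual (no treatment): {v_untreated:.1f} cm^3")
print(f"Observed under treatment:      {v_treated:.1f} cm^3")

# If the observed tumor is nearly as large as the counterfactual one, the
# treatment has likely lost its efficacy and it may be time to switch.
if v_treated > 0.9 * v_untreated:
    print("Treatment appears to have lost efficacy; consider a change.")
else:
    print(f"Treated tumor is ~{v_untreated / v_treated:.0f}x smaller than the counterfactual.")
```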

The models can also be updated as new treatments come out, which could help clinicians virtually explore how a new therapy might affect a patient before actually switching that patient’s treatment.

Digital twins could also assist in decision-making based on a patient’s priorities and real-life circumstances. “Maybe your priority is not necessarily to shrink this [tumor] at all costs ... maybe your priority is some mix of that and also quality of life,” Macklin said, referring to potential side effects. Or if someone lives 3 hours from the nearest cancer center, a digital twin could help determine whether less frequent treatments could still be effective.

And while much of the activity around digital twins in biomedical research has been focused on cancer, Asghar said the technology has the potential to be applied to other diseases as well. A digital twin for cardiovascular disease could help doctors choose the best treatment. It could also integrate new information from a smartwatch or glucose monitor to make better predictions and help doctors adjust the treatment plan.

Faster, More Effective Research With Twins

Because digital twin programs can quickly analyze large datasets, they can also make real-world studies more effective and efficient.

Though digital twins would not fully replace real clinical trials, they could help run through preliminary scenarios before starting a full clinical trial, which would “save everybody some money, time and pain and risk,” said Macklin.

It’s also possible to use digital twins to design better screening strategies for early cancer detection and monitoring, said Ioannis Zervantonakis, PhD, a bioengineering professor at the University of Pittsburgh.

Zervantonakis is tapping digital twin technology for research that homes in on understanding tumors. In this case, the digital twin is a virtual representation of a real tumor, complete with its complex network of cells and the surrounding tissue.

Zervantonakis’ lab is using the technology to study cell-cell interactions in the tumor microenvironment, with a focus on human epidermal growth factor receptor 2–targeted therapy resistance in breast cancer. The digital twin they developed will simulate tumor growth, predict drug response, analyze cellular interactions, and optimize treatment strategies.

The Long Push Forward

One big hurdle to making digital twins more widely available is that regulation for the technology is still in progress.

“We’re developing the technology, and what’s also happening is the regulatory framework is being developed in parallel. So we’re almost developing things blindly on the basis that we think this is what the regulators would want,” explained Asghar.

“It’s really important that these technologies are regulated properly, just like drugs, and that’s what we’re pushing and advocating for,” said Asghar, noting that people need to know that like drugs, a digital twin has strengths and limitations.

And while a digital twin can be a cost-saving approach in the long run, it does require funding to get a program built, and finding funds can be difficult because not everyone knows about the technology. More funding means more trials.

With more data, Asghar is hopeful that within a few years, a digital twin model could be available for clinicians to use to help inform treatment decisions. This could lead to more effective treatments and, ultimately, better patient outcomes.

A version of this article appeared on Medscape.com.

Humans and Carbs: A Complicated 800,000-Year Relationship

Trying to reduce your carbohydrate intake means going against nearly a million years of evolution.

Humans are among a few species with multiple copies of certain genes that help us break down starch — carbs like potatoes, beans, corn, and grains — so that we can turn it into energy our bodies can use.

However, it’s been difficult for researchers to pinpoint when in human history we acquired multiple copies of these genes because they’re in a region of the genome that’s hard to sequence.

A recent study published in Science suggests that humans may have developed multiple copies of the gene for amylase — an enzyme that’s the first step in starch digestion — over 800,000 years ago, long before the agricultural revolution. This genetic change could have helped us adapt to eating starchy foods.

The study shows how “what your ancestors ate thousands of years ago could be affecting our genetics today,” said Kelsey Jorgensen, PhD, a biological anthropologist at The University of Kansas, Lawrence, who was not involved in the study.

The double-edged sword has sharpened over all those millennia. On one hand, the human body needs and craves carbs to function. On the other hand, our modern-day consumption of carbs, especially calorie-dense, nutritionally barren processed carbs, has long since passed “healthy.”

How Researchers Found Our Carb-Lover Gene

The enzyme amylase turns complex carbs into maltose, a sweet-tasting sugar made of two glucose molecules linked together. We make two kinds of amylase: salivary amylase, which breaks down carbs in our mouths, and pancreatic amylase, which is secreted into our small intestines.

Modern humans have multiple copies of both amylases. Past research showed that human populations with diets high in starch can have up to nine copies of the gene for salivary amylase, called AMY1.

To pinpoint when in human history we acquired multiple copies of AMY1, the new study used novel techniques, optical genome mapping and long-read sequencing, to sequence and analyze the genes. The researchers sequenced 98 modern-day samples and 68 ancient DNA samples, including one from a Siberian person who lived 45,000 years ago.

The ancient DNA data in the study allowed the researchers to track how the number of amylase genes changed over time, said George Perry, PhD, an anthropological geneticist at The Pennsylvania State University-University Park (he was not involved in the study).

Based on the sequencing, the team analyzed changes in the genes in their samples to gauge evolutionary timelines. Perry noted that this was a “very clever approach to estimating the amylase copy number mutation rate, which in turn can really help in testing evolutionary hypotheses.”

The researchers found that even before farming, hunter-gatherers carried between four and eight copies of the AMY1 gene. This suggests that people across Eurasia already had multiple copies of the gene long before they started growing crops. (Recent research indicates that Neanderthals also ate starchy foods.)

“Even archaic hominins had these [genetic] variations and that indicates that they were consuming starch,” said Feyza Yilmaz, PhD, an associate computational scientist at The Jackson Laboratory in Bar Harbor, Maine, and a lead author of the study.

However, the research indicates that by 4000 years ago, after the agricultural revolution, people had acquired even more AMY1 copies. Yilmaz noted, “with the advance of agriculture, we see an increase in high amylase copy number haplotypes. So genetic variation goes hand in hand with adaptation to the environment.”

A previous study showed that species that share an environment with humans, such as dogs and pigs, also have copy number variation in amylase genes, said Yilmaz, indicating a link between genome changes and an increase in starch consumption.

Potential Health Impacts on Modern Humans

The duplications in the AMY1 gene could have allowed humans to digest starches better. It’s also conceivable that having more copies of the gene means breaking down starches even more efficiently, and that those with more copies “may be more prone to having high blood sugar, prediabetes, that sort of thing,” Jorgensen said.

Whether those with more AMY1 genes have more health risks is an active area of research. “Researchers tested whether there’s a correlation between AMY1 gene copies and diabetes or BMI [body mass index]. And so far, some studies show that there is indeed correlation, but other studies show that there is no correlation at all,” said Yilmaz.

Yilmaz pointed out that only 5%-10% of carb digestion happens in our mouths; the rest occurs in the small intestine, and many other factors are involved in eating and metabolism.

“I am really looking forward to seeing studies which truly figure out the connection between AMY1 copy number and metabolic health and also what type of factors play a role in metabolic health,” said Yilmaz.

It’s also possible that having more AMY1 copies could lead to more carb cravings as the enzyme creates a type of sugar in our mouths. “Previous studies show that there’s a correlation between AMY1 copy number and also the amylase enzyme levels, so the faster we process the starch, the taste [of starches] will be sweeter,” said Yilmaz.

However, the link between cravings and copy numbers isn’t clear. And we don’t exactly know what came first — did the starch in humans’ diet lead to more copies of amylase genes, or did the copies of the amylase genes drive cravings that lead us to cultivate more carbs? We’ll need more research to find out.

How Will Today’s Processed Carbs Affect Our Genes Tomorrow?

As our diet changes to increasingly include processed carbs, what will happen to our AMY1 genes is fuzzy. “I don’t know what this could do to our genomes in the next 1000 years or more than 1000 years,” Yilmaz noted, but she said from the evidence it seems as though we may have peaked in AMY1 copies.

Jorgensen noted that this research is focused on a European population. She wonders whether the pattern of AMY1 duplication will be repeated in other populations “because the rise of starch happened first in the Middle East and then Europe and then later in the Americas,” she said.

“There’s individual variation and then there’s population-wide variation,” Jorgensen pointed out. She speculates that the historical diet of different cultures could explain population-based variations in AMY1 genes — it’s something future research could investigate. Other populations may also experience genetic changes as much of the world shifts to a more carb-heavy Western diet.

Overall, this research adds to the growing evidence that humans have a long history of loving carbs — for better and, at least over our most recent history and immediate future, for worse.

A version of this article appeared on Medscape.com.

A New Way to ‘Smuggle’ Drugs Through the Blood-Brain Barrier

Getting drugs to the brain is difficult. The very thing designed to protect the brain’s environment — the blood-brain barrier (BBB) — is one of the main reasons diseases like Alzheimer’s are so hard to treat.

And even if a drug can cross the BBB, it’s difficult to ensure it reaches specific areas of the brain like the hippocampus, which is located deep within the brain and notoriously difficult to target with conventional drugs.

However, new research shows that novel bioengineered proteins can target neurons in the hippocampus. Using a mouse model, the researchers found that these proteins could be delivered to the hippocampus intranasally — through the nose via a spray.

“This is an urgent topic because many potential therapeutic agents do not readily cross the blood-brain barrier or have limited effects even after intranasal delivery,” said Konrad Talbot, PhD, professor of neurosurgery and pathology at Loma Linda University, Loma Linda, California, who was not involved in the study.

This is the first time a protein drug, which is larger than many drug molecules, has been specifically delivered to the hippocampus, said Noriyasu Kamei, PhD, a professor of pharmaceutical science at Kobe Gakuin University in Kobe, Japan, and lead author of the study.

How Did They Do It?

“Smuggle” may be a flip term, but it’s not inaccurate.

Insulin has the ability to cross the BBB, so the team began with insulin as the vehicle. By attaching other molecules to an insulin fragment, researchers theorized they could create an insulin fusion protein that can be transported across the BBB and into the brain via a process called macropinocytosis.

They demonstrated the technique in mice by fusing fluorescent proteins to insulin. To treat Alzheimer’s or other diseases, they would want to fuse therapeutic molecules to the insulin for brain delivery, a future step for their research.

Other groups are studying a similar approach, using the transferrin receptor instead of insulin to shuttle molecules across the BBB. However, the transferrin receptor route doesn’t reach the hippocampus, Kamei said.

A benefit of their system, Kamei pointed out, is that because the method just requires a small piece of insulin to work, it’s straightforward to produce in bacteria. Importantly, he said, the insulin fusion protein should not affect blood glucose levels.

Why Insulin?

Aside from its ability to cross the BBB, the team thought to use insulin as the basis of a fusion protein because of their previous work.

“I found that insulin has the unique characteristics to be accumulated specifically in the hippocampal neuronal layers,” Kamei explained. That potential for accumulation is key, as they can deliver more of a drug that way.

In their past work, Kamei and colleagues also found that it could be delivered from the nose to the brain, indicating that it may be possible to use a simple nasal spray.

“The potential for noninvasive delivery of proteins by intranasal administration to the hippocampal neurons is novel,” said John Varghese, PhD, professor of neurology at University of California Los Angeles (he was not involved in the study). He noted that it’s also possible that this method could be harnessed to treat other brain diseases.

There are other drugs that treat central nervous system diseases, such as desmopressin and buserelin, which are available as nasal sprays. However, these drugs are synthetic hormones, and though relatively small molecules, they do not cross the BBB.

There are also antibody treatments for Alzheimer’s, such as aducanumab (which will soon be discontinued), lecanemab, and donanemab. However, they aren’t always effective, they require an intravenous infusion, and they cross the BBB only to a degree; to bolster delivery to the brain, studies have proposed additional methods such as focused ultrasound.

“Neuronal uptake of drugs potentially therapeutic for Alzheimer’s may be significantly enhanced by fusion of those drugs with insulin. This should be a research priority,” said Talbot.

While this is exciting and has potential, such drugs won’t be available anytime soon. Kamei would like to complete the research at a basic level in 5 years, including testing insulin fused with larger proteins such as therapeutic antibodies. If all goes well, they’ll move on to testing insulin fusion drugs in people.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

Getting drugs to the brain is difficult. The very thing designed to protect the brain’s environment — the blood-brain barrier (BBB) — is one of the main reasons diseases like Alzheimer’s are so hard to treat.

And even if a drug can cross the BBB, it’s difficult to ensure it reaches specific areas of the brain like the hippocampus, which is located deep within the brain and notoriously difficult to target with conventional drugs.

However, new research shows that novel bioengineered proteins can target neurons in the hippocampus. Using a mouse model, the researchers found that these proteins could be delivered to the hippocampus intranasally — through the nose via a spray.

“This is an urgent topic because many potential therapeutic agents do not readily cross the blood-brain barrier or have limited effects even after intranasal delivery,” said Konrad Talbot, PhD, professor of neurosurgery and pathology at Loma Linda University, Loma Linda, California, who was not involved in the study.

This is the first time a protein drug, which is larger than many drug molecules, has been specifically delivered to the hippocampus, said Noriyasu Kamei, PhD, a professor of pharmaceutical science at Kobe Gakuin University in Kobe, Japan, and lead author of the study.
 

How Did They Do It?

“Smuggle” may be a flip term, but it’s not inaccurate.

Insulin has the ability to cross the BBB, so the team began with insulin as the vehicle. By attaching other molecules to an insulin fragment, researchers theorized they could create an insulin fusion protein that can be transported across the BBB and into the brain via a process called macropinocytosis.

They executed this technique in mice by fusing florescent proteins to insulin. To treat Alzheimer’s or other diseases, they would want to fuse therapeutic molecules to the insulin for brain delivery — a future step for their research.

Other groups are studying a similar approach using transferrin receptor instead of insulin to shuttle molecules across the BBB. However, the transferrin receptor doesn’t make it to the hippocampus, Kamei said.

A benefit of their system, Kamei pointed out, is that because the method just requires a small piece of insulin to work, it’s straightforward to produce in bacteria. Importantly, he said, the insulin fusion protein should not affect blood glucose levels.
 

Why Insulin?

Aside from its ability to cross the BBB, the team thought to use insulin as the basis of a fusion protein because of their previous work.

“I found that insulin has the unique characteristics to be accumulated specifically in the hippocampal neuronal layers,” Kamei explained. That potential for accumulation is key, as they can deliver more of a drug that way.

In their past work, Kamei and colleagues also found that it could be delivered from the nose to the brain, indicating that it may be possible to use a simple nasal spray.

“The potential for noninvasive delivery of proteins by intranasal administration to the hippocampal neurons is novel,” said John Varghese, PhD, professor of neurology at University of California Los Angeles (he was not involved in the study). He noted that it’s also possible that this method could be harnessed to treat other brain diseases.

There are other drugs that treat central nervous system diseases, such as desmopressin and buserelin, which are available as nasal sprays. However, these drugs are synthetic hormones, and though relatively small molecules, they do not cross the BBB.

There are also antibody treatments for Alzheimer’s, such as aducanumab (which will soon be discontinued), lecanemab, and donanemab; however, they aren’t always effective and they require an intravenous infusion, and while they cross the BBB to a degree, to bolster delivery to the brain, studies have proposed additional methods like focused ultrasound.

“Neuronal uptake of drugs potentially therapeutic for Alzheimer’s may be significantly enhanced by fusion of those drugs with insulin. This should be a research priority,” said Talbot.

While this is exciting and has potential, such drugs won’t be available anytime soon. Kamei would like to complete the research at a basic level in 5 years, including testing insulin fused with larger proteins such as therapeutic antibodies. If all goes well, they’ll move on to testing insulin fusion drugs in people.
 

A version of this article first appeared on Medscape.com.

 



FROM PNAS


The Next Frontier of Antibiotic Discovery: Inside Your Gut

Article Type
Changed
Tue, 08/27/2024 - 09:29

Scientists at Stanford University and the University of Pennsylvania have discovered a new antibiotic candidate in a surprising place: the human gut. 

In mice, the antibiotic — a peptide known as prevotellin-2 — showed antimicrobial potency on par with polymyxin B, an antibiotic medication used to treat multidrug-resistant infections. Meanwhile, the peptide mainly left commensal, or beneficial, bacteria alone. The study, published in Cell, also identified several other potent antibiotic peptides with the potential to combat antimicrobial-resistant infections.

The research is part of a larger quest to find new antibiotics that can fight drug-resistant infections, a critical public health threat with more than 2.8 million cases and 35,000 deaths annually in the United States. That quest is urgent, said study author César de la Fuente, PhD, professor of bioengineering at the University of Pennsylvania, Philadelphia. 

“The main pillars that have enabled us to almost double our lifespan in the last 100 years or so have been antibiotics, vaccines, and clean water,” said Dr. de la Fuente. “Imagine taking out one of those. I think it would be pretty dramatic.” (Dr. de la Fuente’s lab has become known for finding antibiotic candidates in unusual places, like ancient genetic information of Neanderthals and woolly mammoths.)

The first widely used antibiotic, penicillin, was discovered in 1928, when Alexander Fleming, a physician studying Staphylococcus bacteria, returned to his lab after summer break to find mold growing in one of his petri dishes. But many other antibiotics — like streptomycin, tetracycline, and erythromycin — were discovered from soil bacteria, which produce variations of these substances to compete with other microorganisms.

By looking in the gut microbiome, the researchers hoped to identify peptides that the trillions of microbes use against each other in the fight for limited resources — ideally, peptides that wouldn’t broadly kill off the entire microbiome. 
 

Kill the Bad, Spare the Good

Many traditional antibiotics are small molecules that diffuse throughout the body, which means they can wipe out the good bacteria in your gut along with the bad. And because each targets a specific bacterial function, bad bacteria can evolve resistance to them.

Peptide antibiotics, on the other hand, don’t diffuse into the whole body. If taken orally, they stay in the gut; if taken intravenously, they generally stay in the blood. And because of how they kill bacteria, targeting the membrane, they’re also less prone to bacterial resistance.

The microbiome is like a big reservoir of pathogens, said Ami Bhatt, MD, PhD, hematologist at Stanford University in California and one of the study’s authors. Because many antibiotics kill healthy gut bacteria, “what you have left over,” Dr. Bhatt said, “is this big open niche that gets filled up with multidrug-resistant organisms like E coli [Escherichia coli] or vancomycin-resistant Enterococcus.”

Dr. Bhatt has seen cancer patients undergo successful treatment only to die of a multidrug-resistant infection, because current antibiotics fail against those pathogens. “That’s like winning the battle to lose the war.”

By investigating the microbiome, “we wanted to see if we could identify antimicrobial peptides that might spare key members of our regular microbiome, so that we wouldn’t totally disrupt the microbiome the way we do when we use broad-spectrum, small molecule–based antibiotics,” Dr. Bhatt said.

The researchers used artificial intelligence to sift through 400,000 proteins to predict, based on known antibiotics, which peptide sequences might have antimicrobial properties. From the results, they chose 78 peptides to synthesize and test.
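To make that screening step concrete, here is a minimal, hypothetical sketch of what an in silico triage of a peptide library can look like. The scoring heuristic below (net charge plus hydrophobic fraction) is a crude stand-in for the trained model the Stanford and Penn teams actually used, and the sequences are invented for illustration.

```python
# Illustrative sketch only: a toy stand-in for machine learning-based
# antimicrobial peptide screening. The study trained a model on known
# antibiotics and ranked ~400,000 proteins; nothing here reproduces that.

HYDROPHOBIC = set("AILMFWVY")   # residues counted as hydrophobic
CATIONIC = set("KR")            # positively charged residues
ANIONIC = set("DE")             # negatively charged residues


def toy_amp_score(seq: str) -> float:
    """Crude heuristic: cationic, moderately hydrophobic peptides tend to
    disrupt bacterial membranes, the mechanism the article describes."""
    n = len(seq)
    net_charge = sum(aa in CATIONIC for aa in seq) - sum(aa in ANIONIC for aa in seq)
    hydrophobic_fraction = sum(aa in HYDROPHOBIC for aa in seq) / n
    return net_charge / n + hydrophobic_fraction


def rank_for_synthesis(library: dict[str, str], top_n: int) -> list[str]:
    """Rank candidate peptides by score and keep the top_n for lab testing."""
    return sorted(library, key=lambda name: toy_amp_score(library[name]),
                  reverse=True)[:top_n]


if __name__ == "__main__":
    # Hypothetical sequences, not peptides from the study.
    candidates = {
        "pep-a": "GIGKFLKKAKKFGKAFVKILKK",
        "pep-b": "DAEFRHDSGYEVHHQK",
        "pep-c": "KWKLFKKIEKVGQNIRDG",
    }
    print(rank_for_synthesis(candidates, top_n=2))
```

In the study itself, this kind of computational ranking was only the first filter; the 78 shortlisted peptides still had to be synthesized and tested against real bacteria.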

“The application of computational approaches combined with experimental validation is very powerful and exciting,” said Jennifer Geddes-McAlister, PhD, professor of cell biology at the University of Guelph in Ontario, Canada, who was not involved in the study. “The study is robust in its approach to microbiome sampling.” 
 

 

 

The Long Journey from Lab to Clinic

More than half of the peptides the team tested effectively inhibited the growth of harmful bacteria, and prevotellin-2 (derived from the bacterium Prevotella copri) stood out as the most powerful.

“The study validates experimental data from the lab using animal models, which moves discoveries closer to the clinic,” said Dr. Geddes-McAlister. “Further testing with clinical trials is needed, but the potential for clinical application is promising.” 

Unfortunately, that’s not likely to happen anytime soon, said Dr. de la Fuente. “There is not enough economic incentive” for companies to develop new antibiotics. Ten years is his most hopeful guess for when we might see prevotellin-2, or a similar antibiotic, complete clinical trials.

A version of this article first appeared on Medscape.com.





FROM CELL


Could Targeting ‘Zombie Cells’ Extend a Healthy Lifespan?

Article Type
Changed
Wed, 08/14/2024 - 12:12

What if a drug could help you live a longer, healthier life?

Scientists at the University of Connecticut are working on it. In a new study in Cell Metabolism, researchers described how to target specific cells to extend the lifespan and improve the health of mice late in life.

The study builds on a growing body of research, mostly in animals, testing interventions to slow aging and prolong health span, the length of time that one is not just alive but also healthy.

“Aging is the most important risk factor for every disease that we deal with in adult human beings,” said cardiologist Douglas Vaughan, MD, director of the Potocsnak Longevity Institute at Northwestern University’s Feinberg School of Medicine, Chicago. (Dr. Vaughan was not involved in the new study.) “So the big hypothesis is: If we could slow down aging just a little bit, we can push back the onset of disease.”

As we age, our cells wear out. It’s called cellular senescence — a state of irreversible cell cycle arrest — and it’s increasingly recognized as a key contributor to aging.

Senescent cells — or “zombie cells” — secrete harmful substances that disrupt tissue functioning. They’ve been linked to chronic inflammation, tissue damage, and the development of age-related diseases.

Senescence can be characterized by the accumulation of cells with high levels of specific markers such as p21 (so-called p21high cells). Almost any cell can become a p21high cell, and they accumulate with age, said Ming Xu, PhD, a professor at the UConn Center on Aging, UConn Health, Farmington, Connecticut, who led the study.

By targeting and eliminating p21high senescent cells, Dr. Xu hopes to develop novel therapies that might help people live longer and enjoy more years in good health.

Such a treatment could be ready for human trials in 2-5 years, Dr. Xu said.
 

What the Researchers Did

Xu and colleagues used genetic engineering to eliminate p21high cells in mice, introducing into their genome something they describe as an inducible “suicide gene.” Giving the mice a certain drug (a low dose of tamoxifen) activated the suicide gene in all p21high cells, causing them to die. Administering this treatment once a month, from age 20 months (older age) until the end of life, significantly extended the rodents’ lifespan, reduced inflammation, and decreased gene activity linked to aging.

Treated mice lived, on average, for 33 months — 3 months longer than the untreated mice. The oldest treated mouse lived to 43 months — roughly 130 in human years.

But the treated mice didn’t just live longer; they were also healthier. In humans, walking speed and grip strength can be clues of overall health and vitality. The old, treated mice were able to walk faster and grip objects with greater strength than untreated mice of the same age.

Dr. Xu’s lab is now testing drugs that target p21high cells in hopes of finding one that would work in humans. Leveraging immunotherapy technology to target these cells could be another option, Dr. Xu said.

The team also plans to test whether eliminating p21high cells could prevent or alleviate diabetes or Alzheimer’s disease.
 

 

 

Challenges and Criticisms

The research provides “important evidence that targeting senescence and the molecular components of that pathway might provide some benefit in the long term,” Dr. Vaughan said.

But killing senescent cells could come with downsides.

“Senescence protects us from hyperproliferative responses,” potentially blocking cells from becoming malignant, Dr. Vaughan said. “There’s this effect on aging that is desirable, but at the same time, you may enhance your risk of cancer or malignancy or excessive proliferation in some cells.”

And of course, we don’t necessarily need drugs to prolong healthy life, Dr. Vaughan pointed out.

For many people, a long healthy life is already within reach. Humans live longer on average than they used to, and simple lifestyle choices — nourishing your body well, staying active, and maintaining a healthy weight — can increase one’s chances of good health.

The most consistently demonstrated intervention for extending lifespan “in almost every animal species is caloric restriction,” Dr. Vaughan said. (Dr. Xu’s team is also investigating whether fasting and exercise can lead to a decrease in p21high cells.)

As for brain health, Dr. Vaughan and colleagues at Northwestern are studying “super agers,” people who are cognitively intact into their 90s.

“The one single thing that they found that contributes to that process, and contributes to that success, is really a social network and human bonds and interaction,” Dr. Vaughan said.

A version of this article appeared on Medscape.com.





Light During Nighttime Linked to Diabetes Risk

Article Type
Changed
Thu, 07/11/2024 - 13:14

Concerned about your patient’s type 2 diabetes risk? Along with the usual preventive strategies — like diet and exercise and, when appropriate, glucagon-like peptide 1 (GLP-1) agonists — there’s another simple, no-risk strategy that just might help: Turning off the light at night.

A study in The Lancet found that people who were exposed to the most light between 12:30 a.m. and 6 a.m. were 1.5 times more likely to develop diabetes than those who remained in darkness during that time frame.

The study builds on growing evidence linking nighttime light exposure to type 2 diabetes risk. But unlike previous large studies that relied on satellite data of outdoor light levels (an indirect measure of light exposure), the recent study looked at personal light exposure — that is, light measured directly on individuals — as recorded by a wrist-worn sensor.

“Those previous studies likely underestimated the effect,” said study author Andrew Phillips, PhD, professor of sleep health at Flinders University in Adelaide, Australia, “since they did not capture indoor light environments.”

Using data from 85,000 participants from the UK Biobank, the recent study is the largest to date linking diabetes risk to personal light exposure at night.

“This is really a phenomenal study,” said Courtney Peterson, PhD, a scientist at the University of Alabama at Birmingham’s Diabetes Research Center, who was not involved in the study. “This is the first large-scale study we have looking at people’s light exposure patterns and linking it to their long-term health.”
 

What the Study Showed

The participants wore the light sensors for a week, recording day and night light from all sources — whether from sunlight, lamps, streetlights, or digital screens. The researchers then tracked participants for 8 years.

“About half of the people that we looked at had very dim levels of light at night, so less than 1 lux — that basically means less than candlelight,” said Dr. Phillips. “They were the people who were protected against type 2 diabetes.”

Those exposed to more light at night — defined in the study as 12:30 a.m.–6 a.m. — had a higher risk for type 2 diabetes. The risk rose in a dose-response fashion, Phillips said: The brighter the light exposure, the higher the diabetes risk.

Participants in the top 10% of light exposure — who were exposed to about 48 lux, or the equivalent of relatively dim overhead lighting — were 1.5 times more likely to develop diabetes than those in the dark. That’s about the risk increase you’d get from having a family history of type 2 diabetes, the researchers said.
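As a back-of-the-envelope illustration of what a 1.5-fold relative risk means in absolute terms (the baseline figure below is hypothetical, chosen only to make the arithmetic concrete, not taken from the study):

\[
\text{assumed baseline 8-year risk} = 4\% \quad\Rightarrow\quad 4\% \times 1.5 = 6\%
\]

that is, an absolute increase of roughly 2 percentage points for the most light-exposed group.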

Even when they controlled for factors like socioeconomic status, smoking, diet, exercise, and shift work, “we still found there was this very strong relationship between light exposure and risk of type 2 diabetes,” said Dr. Phillips.
 

How Light at Night May Increase Diabetes Risk

The results are not entirely surprising, said endocrinologist Susanne Miedlich, MD, a professor at the University of Rochester Medical Center, Rochester, New York, who was not involved in the study.

Light at night can disrupt the circadian rhythm, or your body’s internal 24-hour cycle. And scientists have long known that circadian rhythm is important for all kinds of biologic processes, including how the body manages blood sugar.

One’s internal clock regulates food intake, sugar absorption, and the release of insulin. Dysregulation in the circadian rhythm is associated with insulin resistance, a precursor to type 2 diabetes.

Dr. Phillips speculated that the sleep hormone melatonin also plays a role.

“Melatonin does a lot of things, but one of the things that it does is it manages our glucose and our insulin responses,” Dr. Phillips said. “So if you’re chronically getting light exposure at night, that’s reducing a level of melatonin that, in the long term, could lead to poor metabolic outcomes.”

Previous studies have explored melatonin supplementation to help manage diabetes. “However, while melatonin clearly regulates circadian rhythms, its utility as a drug to prevent diabetes has not really panned out thus far,” Dr. Miedlich said.
 

Takeaways

Interventional studies are needed to confirm whether strategies like powering down screens, turning off lights, or using blackout curtains could reduce diabetes risk.

That said, “there’s no reason not to tell people to get healthy light exposure patterns and sleep, especially in the context of diabetes,” said Dr. Phillips.

Other known strategies for reducing diabetes risk include intensive lifestyle programs, which reduce risk by up to 58%, and GLP-1 agonists.

“Probably a GLP-1 agonist is going to be more effective,” Dr. Peterson said. “But this is still a fairly large effect without having to go through the expense of buying a GLP-1 or losing a lot of weight or making a big lifestyle change.”

A version of this article first appeared on Medscape.com.





FROM THE LANCET


Polycystic Ovarian Syndrome: New Science Offers Old Remedy

Article Type
Changed
Wed, 06/19/2024 - 15:09

An ancient Chinese remedy for malaria could offer new hope to the 10% of reproductive-age women living with polycystic ovarian syndrome (PCOS), a poorly understood endocrine disorder that can cause hormonal imbalances, irregular periods, and cysts in the ovaries.

“PCOS is among the most common disorders of reproductive-age women,” said endocrinologist Andrea Dunaif, MD, a professor at the Icahn School of Medicine at Mount Sinai, New York City, who studies diabetes and women’s health. “It is a major risk factor for obesity, type 2 diabetes, and heart disease.” It’s also a leading cause of infertility.

Yet despite how common it is, PCOS has no Food and Drug Administration–approved treatments, though a few early-stage clinical trials are underway. Many women end up taking off-label medications such as oral contraceptives, insulin-sensitizing agents, and antiandrogens to help manage symptoms. Surgery can also be used to treat fertility problems associated with PCOS, though it may not work for everyone.

In a new study, a derivative of artemisinin — a molecule that comes from Artemisia plants, which have been used as far back as 1596 to treat malaria in China — helped relieve PCOS symptoms in rats and a small group of women.

Previously, the study’s lead researcher Qi-qun Tang, MD, PhD, had found that this derivative, called artemether, can increase thermogenesis, boosting metabolism. Dr. Tang and his team at Fudan University, Shanghai, China, wanted to see if it would help with PCOS, which is associated with metabolic problems such as insulin resistance.
 

What the Researchers Did

To simulate PCOS in rats, the team treated the rodents with insulin and human chorionic gonadotropin. Then, they tested artemether on the rats and found that it lowered androgen production in the ovaries.

“A common feature [of PCOS] is that the ovaries, and often the adrenal glands, make increased male hormones, nowhere near what a man makes but slightly above what a normal woman makes,” said Dr. Dunaif, who was not involved in the study.

Artemether “inhibits one of the steroidogenic enzymes, CYP11A1, which is important in the production of male hormones,” Dr. Tang said. It does this by increasing the enzyme’s interaction with a protein called LONP1, triggering the enzyme’s breakdown. Increased levels of LONP1 also appeared to suppress androgen production in the ovaries.

In a pilot clinical study of 19 women with PCOS, taking dihydroartemisinin — an approved drug used to treat malaria that contains active artemisinin derivatives — for 12 weeks substantially reduced serum testosterone and anti-Müllerian hormone levels (which are higher in women with PCOS). Using ultrasound, the researchers found that the antral follicle count (also higher than normal with PCOS) had been reduced. All participants had regular menstrual cycles during treatment. And no one reported significant side effects.

“Regular menstrual cycles suggest that there is ovulation, which can result in conception,” Dr. Dunaif said. Still, testing would be needed to confirm that cycles are ovulatory.

Lowering androgen levels “could improve a substantial portion of the symptoms of PCOS,” said Dr. Dunaif. But the research didn’t see an improvement in insulin sensitivity among the women, suggesting that targeting androgens may not help the metabolic symptoms.
 

What’s Next? 

A larger, placebo-controlled trial would still be needed to assess the drug’s efficacy, said Dr. Dunaif, pointing out that the human study did not have a placebo arm.

And unanswered questions remain. Are there any adrenal effects of the compound? “The enzymes that produce androgens are shared between the ovary and the adrenal [gland],” Dr. Dunaif said, but she pointed out that the study doesn’t address whether there is an adrenal benefit. It’s something to look at in future research.

Still, because artemisinin is an established drug, it may come to market faster than a new molecule would, she said. However, a pharmaceutical company would need to be willing to take on the drug. (Dr. Tang said several companies have already expressed interest.)

And while you can buy artemisinin on the Internet, Dr. Dunaif warned not to start taking it if you have PCOS. “I don’t think we’re at that point,” Dr. Dunaif said.

A version of this article first appeared on Medscape.com.



An ancient Chinese remedy for malaria could offer new hope to the 10% of reproductive-age women living with polycystic ovarian syndrome (PCOS), a poorly understood endocrine disorder that can cause hormonal imbalances, irregular periods, and cysts in the ovaries.

“PCOS is among the most common disorders of reproductive-age women,” said endocrinologist Andrea Dunaif, MD, a professor at the Icahn School of Medicine at Mount Sinai, New York City, who studies diabetes and women’s health. “It is a major risk factor for obesity, type 2 diabetes, and heart disease.” It’s also a leading cause of infertility.

Yet despite how common it is, PCOS has no Food and Drug Administration–approved treatments, though a few early-stage clinical trials are underway. Many women end up taking off-label medications such as oral contraceptives, insulin-sensitizing agents, and antiandrogens to help manage symptoms. Surgery can also be used to treat fertility problems associated with PCOS, though it may not work for everyone.

In a new study, a derivative of artemisinin — a molecule that comes from Artemisia plants, which have been used as far back as 1596 to treat malaria in China — helped relieve PCOS symptoms in rats and a small group of women.

Previously, the study’s lead researcher Qi-qun Tang, MD, PhD, had found that this derivative, called artemether, can increase thermogenesis, boosting metabolism. Dr. Tang and his team at Fudan University, Shanghai, China, wanted to see if it would help with PCOS, which is associated with metabolic problems such as insulin resistance.
 

What the Researchers Did

To simulate PCOS in rats, the team treated the rodents with insulin and human chorionic gonadotropin. Then, they tested artemether on the rats and found that it lowered androgen production in the ovaries.

“A common feature [of PCOS] is that the ovaries, and often the adrenal glands, make increased male hormones, nowhere near what a man makes but slightly above what a normal woman makes,” said Dr. Dunaif, who was not involved in the study.

Artemether “inhibits one of the steroidogenic enzymes, CYP11A1, which is important in the production of male hormones,” Dr. Tang said. It does this by increasing the enzyme’s interaction with a protein called LONP1, triggering the enzyme’s breakdown. Increased levels of LONP1 also appeared to suppress androgen production in the ovaries.

In a pilot clinical study of 19 women with PCOS, taking dihydroartemisinin — an approved drug used to treat malaria that contains active artemisinin derivatives — for 12 weeks substantially reduced serum testosterone and anti-Müllerian hormone levels (which are higher in women with PCOS). Using ultrasound, the researchers found that the antral follicle count (also higher than normal with PCOS) had been reduced. All participants had regular menstrual cycles during treatment. And no one reported significant side effects.

“Regular menstrual cycles suggest that there is ovulation, which can result in conception,” Dr. Dunaif said. Still, testing would be needed to confirm that cycles are ovulatory.

Lowering androgen levels “could improve a substantial portion of the symptoms of PCOS,” said Dr. Dunaif. But the research didn’t see an improvement in insulin sensitivity among the women, suggesting that targeting androgens may not help the metabolic symptoms.
 

What’s Next? 

A larger, placebo-controlled trial would still be needed to assess the drug’s efficacy, said Dr. Dunaif, pointing out that the human study did not have a placebo arm.

And unanswered questions remain. Are there any adrenal effects of the compound? “The enzymes that produce androgens are shared between the ovary and the adrenal [gland],” Dr. Dunaif said, but she pointed out that the study doesn’t address whether there is an adrenal benefit. It’s something to look at in future research.

Still, because artemisinin is an established drug, it may come to market faster than a new molecule would, she said. However, a pharmaceutical company would need to be willing to take on the drug. (Dr. Tang said several companies have already expressed interest.)

And while you can buy artemisinin on the Internet, Dr. Dunaif warned not to start taking it if you have PCOS. “I don’t think we’re at that point,” Dr. Dunaif said.

A version of this article first appeared on Medscape.com.

Publications
Publications
Topics
Article Type
Sections
Article Source

FROM SCIENCE

Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica
Hide sidebar & use full width
render the right sidebar.
Conference Recap Checkbox
Not Conference Recap
Clinical Edge
Display the Slideshow in this Article
Medscape Article
Display survey writer
Reuters content
Disable Inline Native ads
WebMD Article

‘Big Breakthrough’: New Low-Field MRI Is Safer and Easier

Article Type
Changed
Tue, 05/28/2024 - 15:02

For years, researchers and medical companies have explored low-field MRI systems (those with a magnetic field strength of less than 1 T) — searching for a feasible alternative to the loud, expensive machines requiring special rooms with shielding to block their powerful magnetic field.

Most low-field scanners in development are for brain scans only. In 2022, the US Food and Drug Administration (FDA) cleared the first portable MRI system — Hyperfine’s Swoop, designed for use at a patient’s bedside — for head and brain scans. But the technology has not been applied to whole-body MRI — until now.

In a new study published in Science, researchers from Hong Kong described a whole-body, ultra low–field MRI.

“This is a big breakthrough,” said Kevin Sheth, MD, director of the Yale Center for Brain & Mind Health, who was not involved in the study. “It is one of the first, if not the first, demonstrations of low-field MRI imaging for the entire body.”

The device uses a 0.05 T magnet — one sixtieth the magnetic field strength of the standard 3 T MRI model common in hospitals today, said lead author Ed Wu, PhD, professor of biomedical engineering at The University of Hong Kong.

Because the field strength is so low, no protective shielding is needed. Patients and bystanders can safely use smartphones. And the scanner is safe for patients with implanted devices, such as a cochlear implant or pacemaker, or any metal on their body or clothes. No hearing protection is required, either, because the machine is so quiet.

If all goes well, the technology could be commercially available in as little as a few years, Dr. Wu said.

But first, funding and FDA approval would be needed. “A company is going to have to come along and say, ‘This looks fantastic. We’re going to commercialize this, and we’re going to go through this certification process,’ ” said Andrew Webb, PhD, professor of radiology and the founding director of the C.J. Gorter MRI Center at the Leiden University Medical Center, Leiden, the Netherlands. (Dr. Webb was not involved in the study.)
 

Improving Access to MRI

One hope for this technology is to bring MRI to more people worldwide. Africa has less than one MRI scanner per million residents, whereas the United States has about 40.

While a new 3 T machine can cost about $1 million, the low-field version is much cheaper — only about $22,000 in materials cost per scanner, according to Dr. Wu.

A low magnetic field means less electricity, too — the machine can be plugged into a standard wall outlet. And because a fully shielded room isn’t needed, that could save another $100,000 in materials, Dr. Webb said.

Its ease of use could improve accessibility in countries with limited training, Dr. Webb pointed out.

“To be a technician is 2-3 years training for a regular MRI machine, a lot of it to do safety, a lot of it to do very subtle planning,” said Dr. Webb. “These [low-field] systems are much simpler.”
 

Challenges and the Future

The prototype weighs about 1.5 tons or 3000 lb. (A 3 T MRI can weigh between 6 and 13 tons or 12,000 and 26,000 lb.) That might sound like a lot, but it’s comparable to a mobile CT scanner, which is designed to be moved from room to room. Plus, “its weight can be substantially reduced if further optimized,” Dr. Wu said.

One challenge with low-field MRI is image quality: the images tend to be less clear and detailed than those from high-field machines. To address this, the research team used deep learning (artificial intelligence) to enhance the image quality. “Computing power and large-scale data underpin our success, which tackles the physics and math problems that are traditionally considered intractable in existing MRI methodology,” Dr. Wu said.
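
To give a sense of what that kind of AI enhancement involves, here is a minimal, purely illustrative sketch of a convolutional denoiser in PyTorch. The architecture, data, and training loop are hypothetical stand-ins, not the model the Hong Kong team actually used.

```python
# Illustrative sketch only: a tiny convolutional denoiser of the kind often
# used to clean up noisy, low-signal images. The architecture, image sizes,
# and training data below are hypothetical, not the published model.
import torch
import torch.nn as nn

class TinyDenoiser(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 16, kernel_size=3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, kernel_size=3, padding=1),
        )

    def forward(self, x):
        # Predict the noise and subtract it from the input image.
        return x - self.net(x)

model = TinyDenoiser()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
loss_fn = nn.MSELoss()

# Stand-in training pair: a "clean" reference image and a noisy copy of it.
clean = torch.rand(8, 1, 64, 64)            # batch of 8 single-channel images
noisy = clean + 0.1 * torch.randn_like(clean)

for step in range(50):
    optimizer.zero_grad()
    loss = loss_fn(model(noisy), clean)     # learn to recover the clean image
    loss.backward()
    optimizer.step()
```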

Dr. Webb said he was impressed by the image quality shown in the study. The images “look much higher quality than you would expect from such a low-field system,” he said. Still, only healthy volunteers were scanned. The true test will be using it to view subtle pathologies, Dr. Webb said.

That’s what Dr. Wu and his team are working on now — taking scans to diagnose various medical conditions. His group’s brain-only version of the low-field MRI has been used for diagnosis, he noted.
 

A version of this article appeared on Medscape.com.

The Fascinating Way to Measure Glucose With a Phone’s Compass

Article Type
Changed
Tue, 04/23/2024 - 15:21

 

Here’s a new direction for smartphones in healthcare. 

Researchers from the National Institute of Standards and Technology (NIST), Boulder, Colorado, say a smartphone compass could be used to analyze biomarkers in body fluids — blood, sweat, urine, or saliva — to monitor or diagnose disease.

“We’re just at this point demonstrating this new way of sensing that we hope [will be] very accessible and very portable,” said Gary Zabow, PhD, a group leader in the applied physics division at NIST who supervised the research. 

In a proof-of-concept study, the researchers measured glucose levels in sangria, pinot grigio, and champagne. The detection limit reached micromolar concentrations — on par with or better than some widely used glucose sensors, such as continuous glucose monitors. They also accurately measured the pH levels of coffee, orange juice, and root beer.

More tests are needed to confirm the method works in biological fluids, so it could be a while before it’s available for clinical or commercial use. 

Still, the prospect is “exciting,” said Aydogan Ozcan, PhD, a bioengineering professor at the University of California, Los Angeles, who was not involved in the study. “It might enable new capabilities for advanced sensing applications in field settings or even at home.”

The advance builds on growing research using smartphones to put powerful medical devices in patients’ hands. A new AI-powered app can use a smartphone camera to detect skin cancer, while other apps administer cognitive tests to detect dementia. Smartphone cameras can even be harnessed for “advanced optical microscopes and sensors to the level where we could even see and detect individual DNA molecules with inexpensive optical attachments,” Dr. Ozcan said. More than six billion people worldwide own a smartphone.

The compass inside smartphones is a magnetometer — it measures magnetic fields. Normally it detects the Earth’s magnetic field, but it can also detect small, nearby magnets and changes in those magnets’ positions.

The researchers embedded a small magnet inside a strip of “smart hydrogel — a piece of material that expands or contracts” when immersed in a solution, said Dr. Zabow.

As the hydrogel gets bigger or smaller, it moves the magnet, Dr. Zabow explained. For example, if the hydrogel is designed to expand when the solution is acidic or contract when it’s basic, it can move the magnet closer or farther from the phone’s magnetometer, providing an indicator of pH. For glucose, the hydrogel expands or contracts depending on the concentration of sugar in the liquid.

With some calibration and coding to translate that reading into a number, “you can effectively read out glucose or pH,” Dr. Zabow said.
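
As a rough illustration of what that calibration and readout could look like in software, here is a short, hypothetical Python sketch. The numbers, units, and function are invented for illustration and are not the NIST team’s actual code.

```python
# Hypothetical sketch of the calibration step: map a magnetometer reading
# (in microtesla) to a glucose concentration via an interpolated calibration
# curve measured beforehand with known standards. All values are made up.
import numpy as np

# Calibration data: field measured by the phone (uT) at known glucose levels (mM).
calib_field_uT = np.array([52.0, 48.5, 45.2, 42.6, 40.3])
calib_glucose_mM = np.array([0.0, 2.5, 5.0, 7.5, 10.0])

def field_to_glucose(field_uT: float) -> float:
    """Interpolate a glucose estimate from a single magnetometer reading."""
    # np.interp expects increasing x values, so sort the calibration points.
    order = np.argsort(calib_field_uT)
    return float(np.interp(field_uT, calib_field_uT[order], calib_glucose_mM[order]))

reading = 46.0  # example magnetometer value from the phone (uT)
print(f"Estimated glucose: {field_to_glucose(reading):.1f} mM")
```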

Only a small strip of hydrogel is needed, “like a pH test strip that you use for a pool,” said first study author Mark Ferris, PhD, a postdoctoral researcher at NIST. 

Like a pool pH test strip, this test is meant to be “easy to use, and at that kind of price,” Dr. Ferris said. “It’s supposed to be something that’s cheap and disposable.” Each pH hydrogel strip is about 3 cents, and glucose strips are 16 cents, Dr. Ferris estimated. In bulk, those prices could go down.

Next the team plans to test the strips with biological fluids. But complex fluids like blood could pose a challenge, as other molecules present could react with the strip and affect the results. “It may be that you need to tweak the chemistry of the hydrogel to make sure it is really specific to one biomolecule and there is no interference from other biomolecules,” Dr. Zabow said.

The technique could be adapted to detect other biomarkers or molecules, the researchers said. It could also be used to check for chemical contaminants in tap, lake, or stream water. 
 

A version of this article appeared on Medscape.com.

How a Simple Urine Test Could Reveal Early-Stage Lung Cancer

Article Type
Changed
Fri, 01/19/2024 - 14:23

Lung cancer is the deadliest cancer in the world, largely because so many patients are diagnosed late.

Screening more patients could help, yet screening rates remain critically low. In the United States, only about 6% of eligible people get screened, according to the American Lung Association. Contrast that with screening rates for breast, cervical, and colorectal cancer, which all top 70%.

But what if lung cancer detection were as simple as taking a puff on an inhaler and following up with a urine test?

Researchers at the Massachusetts Institute of Technology (MIT), Cambridge, Massachusetts, have developed nanosensors that target lung cancer proteins and can be delivered via inhaler or nebulizer, according to research published this month in Science Advances. If the sensors spot these proteins, they produce a signal in the urine that can be detected with a paper test strip.

“It’s a more complex version of a pregnancy test, but it’s very simple to use,” said Qian Zhong, PhD, an MIT researcher and co-lead author of the study.

Currently, the only recommended screening test for lung cancer is low-dose CT. But not everyone has easy access to screening facilities, said the other co-lead author Edward Tan, PhD, a former MIT postdoc and currently a scientist at the biotech company Prime Medicine, Cambridge, Massachusetts.

“Our focus is to provide an alternative for the early detection of lung cancer that does not rely on resource-intensive infrastructure,” said Dr. Tan. “Most developing countries don’t have such resources” — and residents in some parts of the United States don’t have easy access, either, he said.
 

How It Works

The sensors are polymer nanoparticles coated in DNA barcodes, short DNA sequences that are unique and easy to identify. The researchers engineered the particles to be targeted by protease enzymes linked to stage I lung adenocarcinoma. Upon contact, the proteases cleave off the barcodes, which make their way into the bloodstream and are excreted in urine. A test strip can detect them, revealing results about 20 minutes from the time it’s dipped.

The researchers tested this system in mice genetically engineered to develop human-like lung tumors. Using aerosol nebulizers, they delivered 20 sensors to mice with the equivalent of stage I or II cancer. Using a machine learning algorithm, they identified the four most accurate sensors. With 100% specificity, those four sensors exhibited sensitivity of 84.6%.
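
For readers who want to see how those two figures are defined, here is a tiny, self-contained Python example. The counts are invented solely to illustrate the arithmetic; they are not the study’s data.

```python
# Toy example of computing sensitivity and specificity from test outcomes.
# The counts are invented for illustration; they are not the study's data.
true_positives = 22    # tumor-bearing mice the urine test flagged
false_negatives = 4    # tumor-bearing mice the test missed
true_negatives = 30    # healthy mice correctly reported as negative
false_positives = 0    # healthy mice incorrectly flagged

sensitivity = true_positives / (true_positives + false_negatives)   # ~0.846
specificity = true_negatives / (true_negatives + false_positives)   # 1.0

print(f"Sensitivity: {sensitivity:.1%}, Specificity: {specificity:.1%}")
```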

“One advantage of using inhalation is that it’s noninvasive, and another advantage is that it distributes across the lung quite homogeneously,” said Dr. Tan. The time from inhalation to detection is also relatively fast — in mice, the whole process took about 2 hours, and Dr. Zhong speculated that it would not be much longer in humans.
 

Other Applications and Challenges

An injectable version of this technology, also developed at MIT, has already been tested in a phase 1 clinical trial for diagnosing liver cancer and nonalcoholic steatohepatitis. The injection also works in tandem with a urine test, the researchers showed in 2021. According to Dr. Tan, his research group (led by Sangeeta Bhatia, MD, PhD) was the first to describe this type of technology to screen for diseases.

The lab is also working toward using inhalable sensors to distinguish between viral, bacterial, and fungal pneumonia. And the technology could also be used to diagnose other lung conditions like asthma and chronic obstructive pulmonary disease, Dr. Tan said.

The tech is certainly “innovative,” remarked Gaetano Rocco, MD, a thoracic surgeon and lung cancer researcher at Memorial Sloan Kettering Cancer Center, Basking Ridge, New Jersey, who was not involved in the study.

Still, challenges may arise when applying it to people. Many factors are involved in regulating fluid volume, potentially interfering with the ability to detect the compounds in the urine, Dr. Rocco said. Diet, hydration, drug interference, renal function, and some chronic diseases could all limit effectiveness.

Another challenge: Human cancer can be more heterogeneous (containing different kinds of cancer cells), so four sensors may not be enough, Dr. Zhong said. He and colleagues are beginning to analyze human biopsy samples to see whether the same sensors that worked in mice would also work in humans. If all goes well, they hope to do studies on humans or nonhuman primates.
 

A version of this article appeared on Medscape.com.
