“We eat first with our eyes.”
The Roman foodie Apicius is thought to have uttered those words in the 1st century A.D. Now, some 2,000 years later, scientists may be proving him right.
In a new study, scientists have identified a part of the brain that appears dedicated to recognizing food. Dubbed the “ventral food component,” it resides in the brain’s visual cortex, in a region known to play a role in identifying faces, scenes, and words.
The study, published in the journal Current Biology, used artificial intelligence (AI) to build a computer model of this part of the brain. Similar models are emerging across fields of research to simulate and study complex systems of the body; a computer model of the digestive system, for example, was recently used to determine the best body position for taking a pill.
“The research is still cutting-edge,” says study author Meenakshi Khosla, PhD. “There’s a lot more to be done to understand whether this region is the same or different in different individuals, and how it is modulated by experience or familiarity with different kinds of foods.”
Pinpointing those differences could provide insights into how people choose what they eat, or even help us learn what drives eating disorders, Dr. Khosla says.
Part of what makes this study unique was the researchers’ approach, dubbed “hypothesis neutral.” Instead of setting out to prove or disprove a firm hypothesis, they simply started exploring the data to see what they could find. The goal: to go beyond “the idiosyncratic hypotheses scientists have already thought to test,” the paper says. So they began sifting through a public database called the Natural Scenes Dataset, an inventory of brain scans from eight volunteers viewing 56,720 images.
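What might a “hypothesis neutral” analysis look like in practice? The article does not describe the researchers’ actual pipeline, so the Python sketch below is only illustrative: it assumes one common data-driven approach, factorizing a voxel-by-image response matrix with non-negative matrix factorization and characterizing each component only afterward, by the images that drive it. Every array shape and variable name here is an assumption, not the study’s code.

```python
# Illustrative sketch of a "hypothesis neutral" analysis: rather than testing
# predefined categories, factorize the voxel-by-image response matrix and
# inspect what each recovered component responds to afterward.
# All shapes, values, and names are hypothetical stand-ins.
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical stand-in for Natural Scenes Dataset responses:
# rows = visual-cortex voxels, columns = viewed images (nonnegative values).
rng = np.random.default_rng(0)
responses = rng.random((5000, 1000))

# Factorize into a handful of components without naming them in advance.
model = NMF(n_components=10, init="nndsvda", max_iter=500, random_state=0)
voxel_weights = model.fit_transform(responses)   # (voxels x components)
image_profiles = model.components_               # (components x images)

# Each component is characterized only after the fact, by the images that
# activate it most strongly; in the study, one such component turned out
# to be driven by pictures of food.
for k in range(image_profiles.shape[0]):
    top_images = np.argsort(image_profiles[k])[::-1][:5]
    print(f"component {k}: most-activating image indices {top_images}")
```

The point of this design is that the components are discovered before they are interpreted, so an unexpected category, such as food, can surface on its own rather than being baked into the hypothesis.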
As expected, the software analyzing the dataset spotted brain regions already known to be triggered by images of faces, bodies, words, and scenes. But to the researchers’ surprise, the analysis also revealed a previously unknown part of the brain that seemed to be responding to images of food.
“Our first reaction was, ‘That’s cute and all, but it can’t possibly be true,’” Dr. Khosla says.
To confirm their discovery, the researchers used the data to train a computer model of this part of the brain, a process that takes less than an hour. Then they fed the model more than 1.2 million new images.
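The article does not specify how that computer model was built. The sketch below assumes one standard way to model a visual brain region: features from a pretrained image-recognition network (here, torchvision’s ResNet-50, purely an assumption) with a linear readout fit to the measured fMRI responses. Once fit, such a model can be run on any number of new images.

```python
# Illustrative sketch of an image-computable model of a brain region:
# pretrained vision-network features plus a linear readout fit to measured
# responses. The study's actual architecture is not described in the
# article; this setup is an assumption for illustration only.
import torch
import torchvision.models as models
from torchvision.models import ResNet50_Weights

weights = ResNet50_Weights.DEFAULT
backbone = models.resnet50(weights=weights)
backbone.fc = torch.nn.Identity()    # keep the 2048-d penultimate features
backbone.eval()
preprocess = weights.transforms()

readout = torch.nn.Linear(2048, 1)   # predicts the region's response

def predict_region_response(image):
    """Predicted activation of the modeled brain region for one image."""
    x = preprocess(image).unsqueeze(0)
    with torch.no_grad():
        features = backbone(x)
    return readout(features).item()

# Training (not shown) would fit `readout` so its output matches the fMRI
# responses to the training images; the fitted model can then be run on
# arbitrarily many new images - like the 1.2 million probes mentioned
# above - to see which ones "light it up."
```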
Sure enough, the model lit up in response to food. Color hardly mattered – even black-and-white food images triggered it, though not as strongly as color ones. And the model could tell the difference between food and objects that merely looked like food: a banana versus a crescent moon, or a blueberry muffin versus a puppy with a muffin-like face.
From the human data, the researchers found that some people responded slightly more to processed foods like pizza than to unprocessed foods like apples. They hope to explore how other factors, such as liking or disliking a food, may affect a person’s response to it.
This technology could open up other areas of research as well. Dr. Khosla hopes to use it to explore how the brain responds to social cues like body language and facial expressions.
For now, Dr. Khosla has already begun to verify the computer model in real people by scanning the brains of a new set of volunteers. “We collected pilot data in a few subjects recently and were able to localize this component,” she says.
A version of this article first appeared on Medscape.com.