The cell that might trigger Alzheimer’s disease

It all started with genetic data. A gene here, a gene there. Eventually the story became clearer: If scientists are to one day find a cure for Alzheimer’s disease, they should look to the immune system.

Over the past couple of decades, researchers have identified numerous genes involved in various immune system functions that may also contribute to Alzheimer’s disease. Some of the prime suspects are genes that control microglia, now the focus of intense research in developing new Alzheimer’s drugs.

Microglia are amoeba-like cells that scour the brain for injuries and invaders. They help clear dead or impaired brain cells and literally gobble up invading microbes. Without them, we’d be in trouble.

In a healthy brain, a protein called beta-amyloid is cleared away as molecular junk, both by microglia and through the brain’s waste-drainage (glymphatic) system. But sometimes it builds up. Certain gene mutations are one culprit in this toxic accumulation; traumatic brain injury is another, and impaired microglial function may be a third.

One thing everyone agrees on is that in people with Alzheimer’s disease, too much amyloid accumulates between their brain cells and in the vessels that supply the brain with blood. Once amyloid begins to clog networks of neurons, it triggers the accumulation of another protein, called tau, inside these brain cells. The presence of tau sends microglia and other immune mechanisms into overdrive, resulting in the inflammatory immune response that many experts believe ultimately saps brain vitality in Alzheimer’s disease.

The gene scene

To date, nearly a dozen genes involved in immune and microglial function have been tied to Alzheimer’s disease. The first was CD33, identified in 2008.

“When we got the results, I literally ran to my colleague’s office next door and said, you gotta see this!” said Harvard neuroscientist Rudolph Tanzi. Dr. Tanzi, who goes by Rudy, led the CD33 research. The discovery was quickly named a top medical breakthrough of 2008 by Time magazine.

“We were laughing because what they didn’t know is we had no idea what this gene did,” he joked. But over time, research by Dr. Tanzi and his group revealed that CD33 is a kind of microglial on-off switch, activating the cells as part of an inflammatory pathway.

“We kind of got it all going when it came to the genetics,” he said.

Microglia normally recognize molecular patterns associated with microbes and cellular damage as unwanted. This is how they know to take action – to devour unfamiliar pathogens and dead tissue. Dr. Tanzi believes microglia sense any sign of brain damage as an infection, which causes them to become hyperactive.

Much of our modern human immune system, he explained, evolved many hundreds of thousands of years ago. Our lifespans at the time were far shorter than they are today, and the majority of people didn’t live long enough to develop dementia or the withered brain cells that come with it. So our immune system, he said, assumes any faulty brain tissue is due to a microbe, not dementia. Microglia react aggressively, clearing the area to prevent the spread of infection.

“They say, ‘We better wipe out this part of the brain that’s infected, even if it’s not.’ They don’t know,” quipped Dr. Tanzi. “That’s what causes neuroinflammation. And CD33 turns this response on. The microglia become killers, not just janitors.”

A brake on overactive microglia

If CD33 is the yin, a gene called TREM2 is the yang. Discovered a few years after CD33, TREM2 reins in microglial activation, returning the cells to their role as cellular housekeepers.

Neurologist David Holtzman, MD, of Washington University in St. Louis, who studies TREM2, agrees that wherever you find amyloid, tau, or dead brain cells, there are microglia raring to go and ready to scavenge.

“I think at first a lot of people thought these cells were reacting to Alzheimer’s pathology, and not necessarily a cause of the disease,” he said.

It was the discovery of TREM2 on the heels of CD33 that really shifted the thinking, in part because it produces a protein that, in the brain, is found only in microglia. “Many of us [in the field] immediately said, ‘Look, there’s now a risk factor that is only expressed in microglia. It must be that innate immune cells are important in some way in the pathogenesis of the disease,’” he added.

Dr. Holtzman sees microglial activation in impending dementia as a double-edged sword. In the beginning, microglia clear unwanted amyloid to maintain brain health. But once accumulated amyloid and tau have done enough damage, the neuroinflammation that comes with microglial activation does more harm than good. Neurons die en masse and dementia sets in.

But not all researchers are convinced.

Serge Rivest, PhD, is a professor in the department of molecular medicine at the Laval University Medical School in Quebec. Based on his lab’s research, he believes that while impaired immune activity is involved in Alzheimer’s disease, it is not the root cause. “I don’t think it is the immune cells that do the damage; I still think it is the beta-amyloid itself,” he said. “In my lab, in mouse studies, we’ve never found that immune cells were directly responsible for killing neurons.”

He does believe, however, that in some patients with Alzheimer’s disease, microglia may be unable to handle the excess amyloid that accumulates, and that treatments designed to improve the ability of microglia and the immune system to clear the protein could be effective.

Microglial medicines

The biological cascade leading to Alzheimer’s disease is a tangled one. Gene variants influencing the accumulation and clearance of amyloid are likely a major contributor. But immune activity caused by early-life infection might also be involved, at least in some cases. This infectious theory of Alzheimer’s disease was first proposed by Dr. Tanzi’s late colleague Robert Moir, PhD. Dr. Tanzi’s group even has evidence that amyloid itself is antimicrobial and evolved to protect us from pathogens, only to become a problem when overactive and aggregated.

And the same goes for microglia, cells whose overzealousness might cause much of the brain degeneration seen in Alzheimer’s disease.

In theory, if a treatment could decrease CD33 activity or increase that of TREM2, doctors might one day be able to slow or even stop the progression of dementia. Instead of going after amyloid itself – the approach behind so many failed investigational Alzheimer’s drugs – a therapy that quells the immune response to amyloid might be the answer in treating dementia.

“There are a number of scientists and companies trying to figure out how to influence genes like TREM2 and CD33 and to both decrease amyloid and act on the downstream consequences of the protein,” said Dr. Holtzman. “All of this is to say that somewhere in the biology that causes Alzheimer’s disease, the immune system is involved.”

It seems that in many cases, the most common form of dementia might be due to a well-intentioned immune cell going rogue. “I think you’d hear this from basically any researcher worth their salt,” said Dr. Tanzi. “I feel strongly that without microglial activation, you will not get Alzheimer’s disease.”

A version of this article first appeared on Medscape.com.

A paleolithic raw bar, and the human brush with extinction

This essay is adapted from the newly released book, “A History of the Human Brain: From the Sea Sponge to CRISPR, How Our Brain Evolved.”

“He was a bold man that first ate an oyster.” – Jonathan Swift

That man or, just as likely, that woman, may have done so out of necessity. It was either eat this glistening, gray blob of briny goo or perish.

Beginning about 190,000 years ago, a glacial age we identify today as Marine Isotope Stage 6, or MIS6, set in, cooling and drying out much of the planet. There was widespread drought, leaving the African plains a harsher, more barren substrate for survival – an arena of competition, desperation, and starvation for many species, including ours. Some estimates have the sapiens population dipping to just a few hundred people during MIS6. Like other apes today, we were an endangered species. But through some nexus of intelligence, ecological exploitation, and luck, we managed. Anthropologists argue over what part of Africa would’ve been hospitable enough to rescue sapiens from Darwinian oblivion. Arizona State University archaeologist Curtis Marean, PhD, believes the continent’s southern shore is a good candidate.

For 2 decades, Dr. Marean has overseen excavations at a site called Pinnacle Point on the South African coast. The region has over 9,000 plant species, including the world’s most diverse population of geophytes, plants with underground energy-storage organs such as bulbs, tubers, and rhizomes. These subterranean stores are rich in calories and carbohydrates, and, by virtue of being buried, are protected from most other species (save the occasional tool-wielding chimpanzee). They are also adapted to cold climates and, when cooked, easily digested. All in all, a coup for hunter-gatherers.

The other enticement at Pinnacle Point could be found with a few easy steps toward the sea. Mollusks. Geological samples from MIS6 show South Africa’s shores were packed with mussels, oysters, clams, and a variety of sea snails. We almost certainly turned to them for nutrition.

Dr. Marean’s research suggests that, sometime around 160,000 years ago, at least one group of sapiens began supplementing their terrestrial diet by exploiting the region’s rich shellfish beds. This is the oldest evidence to date of humans consistently feasting on seafood – easy, predictable, immobile calories. No hunting required. As inland Africa dried up, learning to shuck mussels and oysters was a key adaptation to coastal living, one that supported our later migration out of the continent.

Dr. Marean believes the change in behavior was possible thanks to our already keen brains, which supported an ability to track tides, especially spring tides. Spring tides occur twice a month, with each new and full moon, and result in the greatest difference between high and low tidewaters. The people of Pinnacle Point learned to exploit this cycle. “By tracking tides, we would have had easy, reliable access to high-quality proteins and fats from shellfish every 2 weeks as the ocean receded,” he said. “Whereas you can’t rely on land animals to always be in the same place at the same time.” Work by Jan De Vynck, PhD, a professor at Nelson Mandela University in South Africa, supports this idea, showing that foraging shellfish beds under optimal tidal conditions can yield a staggering 3,500 calories per hour.

“I don’t know if we owe our existence to seafood, but it was certainly important for the population [that Dr.] Curtis studies. That place is full of mussels,” said Ian Tattersall, PhD, curator emeritus with the American Museum of Natural History in New York.

“And I like the idea that during a population bottleneck we got creative and learned how to focus on marine resources.” Innovations, Dr. Tattersall explained, typically occur in small, fixed populations. Large populations have too much genetic inertia to support radical innovation; the status quo is enough to survive. “If you’re looking for evolutionary innovation, you have to look at smaller groups.”

MIS6 wasn’t the only near-extinction in our past. During the Pleistocene epoch, roughly 2.5 million to 12,000 years ago, humans tended to maintain a small population, hovering around a million and later growing to maybe 8 million at most. Periodically, our numbers dipped as climate shifts, natural disasters, and food shortages brought us dangerously close to extinction. Modern humans are descended from the hardy survivors of these bottlenecks.

One especially dire stretch occurred around 1 million years ago. Our effective population (the number of breeding individuals) shriveled to around 18,000, smaller than that of other apes at the time. Worse, our genetic diversity – the insurance policy on evolutionary success and the ability to adapt – plummeted. A similar near extinction may have occurred around 75,000 years ago, the result of a massive volcanic eruption in Sumatra.

Our smarts and adaptability helped us endure these tough times – omnivorism helped us weather scarcity.

A sea of vitamins

Both Dr. Marean and Dr. Tattersall agree that the sapiens hanging on in southern Africa couldn’t have lived entirely on shellfish.

Most likely they also spent time hunting and foraging roots inland, making pilgrimages to the sea during spring tides. Dr. Marean believes coastal cuisine may have allowed a paltry human population to hang on until climate change led to more hospitable terrain. He’s not entirely sold on the idea that marine life was necessarily a driver of human brain evolution.

By the time we incorporated seafood into our diets we were already smart, our brains shaped through millennia of selection for intelligence. “Being a marine forager requires a certain degree of sophisticated smarts,” he said. It requires tracking the lunar cycle and planning excursions to the coast at the right times. Shellfish were simply another source of calories.

Unless you ask Michael Crawford.

Dr. Crawford is a professor at Imperial College London and an ardent believer that our brains are those of sea creatures. Sort of.

In 1972, he copublished a paper concluding that the brain is structurally and functionally dependent on an omega-3 fatty acid called docosahexaenoic acid, or DHA. The human brain is composed of nearly 60% fat, so it’s not surprising that certain fats are important to brain health. Nearly 50 years after Dr. Crawford’s study, omega-3 supplements are now a multi-billion-dollar business.

Omega-3s, or more formally, omega-3 polyunsaturated fatty acids (PUFAs), are essential fats, meaning they aren’t produced by the body and must be obtained through diet. We get them from vegetable oils, nuts, seeds, and animals that eat such things. But take an informal poll, and you’ll find most people probably associate omega fatty acids with fish and other seafood.

In the 1970s and 1980s, scientists took notice of the low rates of heart disease in Inuit communities. Research linked their cardiovascular health to a high-fish diet (fish don’t produce omega-3s themselves; they get them from the algae they eat), and eventually the medical and scientific communities began to rethink fat. Study after study found omega-3 fatty acids to be healthy. They were linked with a lower risk for heart disease and lower overall mortality. All those decades of parents forcing various fish oils on their grimacing children now had some science behind them. There is such a thing as a good fat.

Recent studies show that some of omega-3s’ purported health benefits were exaggerated, but they do appear to benefit the brain, especially DHA and eicosapentaenoic acid, or EPA. Omega fats provide structure to neuronal cell membranes and are crucial in neuron-to-neuron communication. They increase levels of a protein called brain-derived neurotrophic factor (BDNF), which supports neuronal growth and survival. A growing body of evidence suggests omega-3 supplementation may slow neurodegeneration, the gradual deterioration of the brain that results in Alzheimer’s disease and other forms of dementia.

Popping a daily omega-3 supplement or, better still, eating a seafood-rich diet, may increase blood flow to the brain. In 2019, the International Society for Nutritional Psychiatry Research recommended omega-3s as an adjunct therapy for major depressive disorder. PUFAs appear to reduce the risk for and severity of mood disorders such as depression and to boost attention in children with ADHD as effectively as drug therapies.

Many researchers claim there would’ve been plenty of DHA available on land to support early humans, and marine foods were just one of many sources.

Not Dr. Crawford.

He believes that brain development and function are not only dependent on DHA but, in fact, DHA sourced from the sea was critical to mammalian brain evolution. “The animal brain evolved 600 million years ago in the ocean and was dependent on DHA, as well as compounds such as iodine, which is also in short supply on land,” he said. “To build a brain, you need these building blocks, which were rich at sea and on rocky shores.”

Dr. Crawford cites his early biochemical work showing that DHA isn’t readily accessible from the muscle tissue of land animals. Using DHA tagged with a radioactive isotope, he and his colleagues found in the 1970s that “ready-made” DHA, like that found in shellfish, is incorporated into the developing rat brain with 10-fold greater efficiency than DHA the body must build from alpha-linolenic acid, the metabolic precursor supplied by plants and land animals. “I’m afraid the idea that ample DHA was available from the fats of animals on the savanna is just not true,” he said. According to Dr. Crawford, our tiny, wormlike ancestors were able to evolve primitive nervous systems and flit through the silt thanks to the abundance of healthy fat to be had by living in the ocean and consuming algae.

For over 40 years, Dr. Crawford has argued that rising rates of mental illness are a result of post–World War II dietary changes, especially the move toward land-sourced food and the medical community’s subsequent support of low-fat diets. He feels that omega-3s from seafood were critical to humans’ rapid neural march toward higher cognition, and are therefore critical to brain health. “The continued rise in mental illness is an incredibly important threat to mankind and society, and moving away from marine foods is a major contributor,” said Dr. Crawford.

University of Sherbrooke (Que.) physiology professor Stephen Cunnane, PhD, tends to agree that aquatically sourced nutrients were crucial to human evolution. It’s the importance of coastal living he’s not sure about. He believes hominins would’ve incorporated fish from lakes and rivers into their diet for millions of years. In his view, it wasn’t just omega-3s that contributed to our big brains, but a cluster of nutrients found in fish: iodine, iron, zinc, copper, and selenium among them. “I think DHA was hugely important to our evolution and brain health but I don’t think it was a magic bullet all by itself,” he said. “Numerous other nutrients found in fish and shellfish were very probably important, too, and are now known to be good for the brain.”

Dr. Marean agrees. “Accessing the marine food chain could have had a huge impact on fertility, survival, and overall health, including brain health, in part, due to the high return on omega-3 fatty acids and other nutrients.” But, he speculates, before MIS6, hominins would have had access to plenty of brain-healthy terrestrial nutrition, including meat from animals that consumed omega-3–rich plants and grains.

Dr. Cunnane agrees with Dr. Marean to a degree. He’s confident that higher intelligence evolved gradually over millions of years as mutations inching the cognitive needle forward conferred survival and reproductive advantages – but he maintains that certain advantages like, say, being able to shuck an oyster, allowed an already intelligent brain to thrive.

Foraging marine life in the waters off of Africa likely played an important role in keeping some of our ancestors alive and supported our subsequent propagation throughout the world. By this point, the human brain was already a marvel of consciousness and computing, not too dissimilar to the one we carry around today.

In all likelihood, Pleistocene humans got their nutrients and calories wherever they could. If we lived inland, we hunted. Maybe we speared the occasional catfish. We sourced nutrients from fruits, leaves, and nuts. A few times a month, those of us near the coast enjoyed a feast of mussels and oysters.

Dr. Stetka is an editorial director at Medscape.com, a former neuroscience researcher, and a nonpracticing physician. A version of this article first appeared on Medscape.

Publications
Topics
Sections

This essay is adapted from the newly released book, “A History of the Human Brain: From the Sea Sponge to CRISPR, How Our Brain Evolved.”

Courtesy Dr. Bret Stetka

“He was a bold man that first ate an oyster.” – Jonathan Swift

That man or, just as likely, that woman, may have done so out of necessity. It was either eat this glistening, gray blob of briny goo or perish.

Courtesy Dr. Bret Stetka
Dr. Bret Stetka

Beginning 190,000 years ago, a glacial age we identify today as Marine Isotope Stage 6, or MIS6, had set in, cooling and drying out much of the planet. There was widespread drought, leaving the African plains a harsher, more barren substrate for survival – an arena of competition, desperation, and starvation for many species, including ours. Some estimates have the sapiens population dipping to just a few hundred people during MIS6. Like other apes today, we were an endangered species. But through some nexus of intelligence, ecological exploitation, and luck, we managed. Anthropologists argue over what part of Africa would’ve been hospitable enough to rescue sapiens from Darwinian oblivion. Arizona State University archaeologist Curtis Marean, PhD, believes the continent’s southern shore is a good candidate.

For 2 decades, Dr. Marean has overseen excavations at a site called Pinnacle Point on the South African coast. The region has over 9,000 plant species, including the world’s most diverse population of geophytes, plants with underground energy-storage organs such as bulbs, tubers, and rhizomes. These subterranean stores are rich in calories and carbohydrates, and, by virtue of being buried, are protected from most other species (save the occasional tool-wielding chimpanzee). They are also adapted to cold climates and, when cooked, easily digested. All in all, a coup for hunter-gatherers.

The other enticement at Pinnacle Point could be found with a few easy steps toward the sea. Mollusks. Geological samples from MIS6 show South Africa’s shores were packed with mussels, oysters, clams, and a variety of sea snails. We almost certainly turned to them for nutrition.

Dr. Marean’s research suggests that, sometime around 160,000 years ago, at least one group of sapiens began supplementing their terrestrial diet by exploiting the region’s rich shellfish beds. This is the oldest evidence to date of humans consistently feasting on seafood – easy, predictable, immobile calories. No hunting required. As inland Africa dried up, learning to shuck mussels and oysters was a key adaptation to coastal living, one that supported our later migration out of the continent.

Dr. Marean believes the change in behavior was possible thanks to our already keen brains, which supported an ability to track tides, especially spring tides. Spring tides occur twice a month with each new and full moon and result in the greatest difference between high and low tidewaters. The people of Pinnacle Point learned to exploit this cycle. “By tracking tides, we would have had easy, reliable access to high-quality proteins and fats from shellfish every 2 weeks as the ocean receded,” he says. “Whereas you can’t rely on land animals to always be in the same place at the same time.” Work by Jan De Vynck, PhD, a professor at Nelson Mandela University in South Africa, supports this idea, showing that foraging shellfish beds under optimal tidal conditions can yield a staggering 3,500 calories per hour!

“I don’t know if we owe our existence to seafood, but it was certainly important for the population [that Dr.] Curtis studies. That place is full of mussels,” said Ian Tattersall, PhD, curator emeritus with the American Museum of Natural History in New York.

“And I like the idea that during a population bottleneck we got creative and learned how to focus on marine resources.” Innovations, Dr. Tattersall explained, typically occur in small, fixed populations. Large populations have too much genetic inertia to support radical innovation; the status quo is enough to survive. “If you’re looking for evolutionary innovation, you have to look at smaller groups.”

MIS6 wasn’t the only near-extinction in our past. During the Pleistocene epoch, roughly 2.5 million to 12,000 years ago, humans tended to maintain a small population, hovering around a million and later growing to maybe 8 million at most. Periodically, our numbers dipped as climate shifts, natural disasters, and food shortages brought us dangerously close to extinction. Modern humans are descended from the hearty survivors of these bottlenecks.

One especially dire stretch occurred around 1 million years ago. Our effective population (the number of breeding individuals) shriveled to around 18,000, smaller than that of other apes at the time. Worse, our genetic diversity – the insurance policy on evolutionary success and the ability to adapt – plummeted. A similar near extinction may have occurred around 75,000 years ago, the result of a massive volcanic eruption in Sumatra.

Our smarts and adaptability helped us endure these tough times – omnivorism helped us weather scarcity.
 

 

 

A sea of vitamins

Both Dr. Marean and Dr. Tattersall agree that the sapiens hanging on in southern Africa couldn’t have lived entirely on shellfish.

Most likely they also spent time hunting and foraging roots inland, making pilgrimages to the sea during spring tides. Dr. Marean believes coastal cuisine may have allowed a paltry human population to hang on until climate change led to more hospitable terrain. He’s not entirely sold on the idea that marine life was necessarily a driver of human brain evolution.

By the time we incorporated seafood into our diets we were already smart, our brains shaped through millennia of selection for intelligence. “Being a marine forager requires a certain degree of sophisticated smarts,” he said. It requires tracking the lunar cycle and planning excursions to the coast at the right times. Shellfish were simply another source of calories.

Unless you ask Michael Crawford.

Dr. Crawford is a professor at Imperial College London and a strident believer that our brains are those of sea creatures. Sort of.

In 1972, he copublished a paper concluding that the brain is structurally and functionally dependent on an omega-3 fatty acid called docosahexaenoic acid, or DHA. The human brain is composed of nearly 60% fat, so it’s not surprising that certain fats are important to brain health. Nearly 50 years after Dr. Crawford’s study, omega-3 supplements are now a multi-billion-dollar business.

Omega-3s, or more formally, omega-3 polyunsaturated fatty acids (PUFAs), are essential fats, meaning they aren’t produced by the body and must be obtained through diet. We get them from vegetable oils, nuts, seeds, and animals that eat such things. But take an informal poll, and you’ll find most people probably associate omega fatty acids with fish and other seafood.

In the 1970s and 1980s, scientists took notice of the low rates of heart disease in Eskimo communities. Research linked their cardiovascular health to a high-fish diet (though fish cannot produce omega-3s, they source them from algae), and eventually the medical and scientific communities began to rethink fat. Study after study found omega-3 fatty acids to be healthy. They were linked with a lower risk for heart disease and overall mortality. All those decades of parents forcing various fish oils on their grimacing children now had some science behind them. There is such a thing as a good fat.

Recent studies show that some of omega-3s’ purported health benefits were exaggerated, but they do appear to benefit the brain, especially DHA and eicosapentaenoic acid, or EPA. Omega fats provide structure to neuronal cell membranes and are crucial in neuron-to-neuron communication. They increase levels of a protein called brain-derived neurotrophic factor (BDNF), which supports neuronal growth and survival. A growing body of evidence shows omega-3 supplementation may slow down the process of neurodegeneration, the gradual deterioration of the brain that results in Alzheimer’s disease and other forms of dementia.

Popping a daily omega-3 supplement or, better still, eating a seafood-rich diet, may increase blood flow to the brain. In 2019, the International Society for Nutritional Psychiatry Research recommended omega-3s as an adjunct therapy for major depressive disorder. PUFAs appear to reduce the risk for and severity of mood disorders such as depression and to boost attention in children with ADHD as effectively as drug therapies.

Many researchers claim there would’ve been plenty of DHA available on land to support early humans, and marine foods were just one of many sources.

Not Dr. Crawford.

He believes that brain development and function are not only dependent on DHA but, in fact, DHA sourced from the sea was critical to mammalian brain evolution. “The animal brain evolved 600 million years ago in the ocean and was dependent on DHA, as well as compounds such as iodine, which is also in short supply on land,” he said. “To build a brain, you need these building blocks, which were rich at sea and on rocky shores.”

Dr. Crawford cites his early biochemical work showing DHA isn’t readily accessible from the muscle tissue of land animals. Using DHA tagged with a radioactive isotope, he and his colleagues in the 1970s found that “ready-made” DHA, like that found in shellfish, is incorporated into the developing rat brain with 10-fold greater efficiency than plant- and land animal–sourced DHA, where it exists as its metabolic precursor alpha-linolenic acid. “I’m afraid the idea that ample DHA was available from the fats of animals on the savanna is just not true,” he disputes. According to Dr. Crawford, our tiny, wormlike ancestors were able to evolve primitive nervous systems and flit through the silt thanks to the abundance of healthy fat to be had by living in the ocean and consuming algae.

For over 40 years, Dr. Crawford has argued that rising rates of mental illness are a result of post–World War II dietary changes, especially the move toward land-sourced food and the medical community’s subsequent support of low-fat diets. He feels that omega-3s from seafood were critical to humans’ rapid neural march toward higher cognition, and are therefore critical to brain health. “The continued rise in mental illness is an incredibly important threat to mankind and society, and moving away from marine foods is a major contributor,” said Dr. Crawford.

University of Sherbrooke (Que.) physiology professor Stephen Cunnane, PhD, tends to agree that aquatically sourced nutrients were crucial to human evolution. It’s the importance of coastal living he’s not sure about. He believes hominins would’ve incorporated fish from lakes and rivers into their diet for millions of years. In his view, it wasn’t just omega-3s that contributed to our big brains, but a cluster of nutrients found in fish: iodine, iron, zinc, copper, and selenium among them. “I think DHA was hugely important to our evolution and brain health but I don’t think it was a magic bullet all by itself,” he said. “Numerous other nutrients found in fish and shellfish were very probably important, too, and are now known to be good for the brain.”

Dr. Marean agrees. “Accessing the marine food chain could have had a huge impact on fertility, survival, and overall health, including brain health, in part, due to the high return on omega-3 fatty acids and other nutrients.” But, he speculates, before MIS6, hominins would have had access to plenty of brain-healthy terrestrial nutrition, including meat from animals that consumed omega-3–rich plants and grains.

Dr. Cunnane agrees with Dr. Marean to a degree. He’s confident that higher intelligence evolved gradually over millions of years as mutations inching the cognitive needle forward conferred survival and reproductive advantages – but he maintains that certain advantages like, say, being able to shuck an oyster, allowed an already intelligent brain to thrive.

Foraging marine life in the waters off of Africa likely played an important role in keeping some of our ancestors alive and supported our subsequent propagation throughout the world. By this point, the human brain was already a marvel of consciousness and computing, not too dissimilar to the one we carry around today.

In all likelihood, Pleistocene humans probably got their nutrients and calories wherever they could. If we lived inland, we hunted. Maybe we speared the occasional catfish. We sourced nutrients from fruits, leaves, and nuts. A few times a month, those of us near the coast enjoyed a feast of mussels and oysters.

Dr. Stetka is an editorial director at Medscape.com, a former neuroscience researcher, and a nonpracticing physician. A version of this article first appeared on Medscape.

This essay is adapted from the newly released book, “A History of the Human Brain: From the Sea Sponge to CRISPR, How Our Brain Evolved.”

Courtesy Dr. Bret Stetka

“He was a bold man that first ate an oyster.” – Jonathan Swift

That man or, just as likely, that woman, may have done so out of necessity. It was either eat this glistening, gray blob of briny goo or perish.

Courtesy Dr. Bret Stetka
Dr. Bret Stetka

Beginning 190,000 years ago, a glacial age we identify today as Marine Isotope Stage 6, or MIS6, had set in, cooling and drying out much of the planet. There was widespread drought, leaving the African plains a harsher, more barren substrate for survival – an arena of competition, desperation, and starvation for many species, including ours. Some estimates have the sapiens population dipping to just a few hundred people during MIS6. Like other apes today, we were an endangered species. But through some nexus of intelligence, ecological exploitation, and luck, we managed. Anthropologists argue over what part of Africa would’ve been hospitable enough to rescue sapiens from Darwinian oblivion. Arizona State University archaeologist Curtis Marean, PhD, believes the continent’s southern shore is a good candidate.

For 2 decades, Dr. Marean has overseen excavations at a site called Pinnacle Point on the South African coast. The region has over 9,000 plant species, including the world’s most diverse population of geophytes, plants with underground energy-storage organs such as bulbs, tubers, and rhizomes. These subterranean stores are rich in calories and carbohydrates, and, by virtue of being buried, are protected from most other species (save the occasional tool-wielding chimpanzee). They are also adapted to cold climates and, when cooked, easily digested. All in all, a coup for hunter-gatherers.

The other enticement at Pinnacle Point could be found with a few easy steps toward the sea. Mollusks. Geological samples from MIS6 show South Africa’s shores were packed with mussels, oysters, clams, and a variety of sea snails. We almost certainly turned to them for nutrition.

Dr. Marean’s research suggests that, sometime around 160,000 years ago, at least one group of sapiens began supplementing their terrestrial diet by exploiting the region’s rich shellfish beds. This is the oldest evidence to date of humans consistently feasting on seafood – easy, predictable, immobile calories. No hunting required. As inland Africa dried up, learning to shuck mussels and oysters was a key adaptation to coastal living, one that supported our later migration out of the continent.

Dr. Marean believes the change in behavior was possible thanks to our already keen brains, which supported an ability to track tides, especially spring tides. Spring tides occur twice a month with each new and full moon and result in the greatest difference between high and low tidewaters. The people of Pinnacle Point learned to exploit this cycle. “By tracking tides, we would have had easy, reliable access to high-quality proteins and fats from shellfish every 2 weeks as the ocean receded,” he says. “Whereas you can’t rely on land animals to always be in the same place at the same time.” Work by Jan De Vynck, PhD, a professor at Nelson Mandela University in South Africa, supports this idea, showing that foraging shellfish beds under optimal tidal conditions can yield a staggering 3,500 calories per hour!

“I don’t know if we owe our existence to seafood, but it was certainly important for the population [that Dr.] Curtis studies. That place is full of mussels,” said Ian Tattersall, PhD, curator emeritus with the American Museum of Natural History in New York.

“And I like the idea that during a population bottleneck we got creative and learned how to focus on marine resources.” Innovations, Dr. Tattersall explained, typically occur in small, fixed populations. Large populations have too much genetic inertia to support radical innovation; the status quo is enough to survive. “If you’re looking for evolutionary innovation, you have to look at smaller groups.”

MIS6 wasn’t the only near-extinction in our past. During the Pleistocene epoch, roughly 2.5 million to 12,000 years ago, humans tended to maintain a small population, hovering around a million and later growing to maybe 8 million at most. Periodically, our numbers dipped as climate shifts, natural disasters, and food shortages brought us dangerously close to extinction. Modern humans are descended from the hearty survivors of these bottlenecks.

One especially dire stretch occurred around 1 million years ago. Our effective population (the number of breeding individuals) shriveled to around 18,000, smaller than that of other apes at the time. Worse, our genetic diversity – the insurance policy on evolutionary success and the ability to adapt – plummeted. A similar near extinction may have occurred around 75,000 years ago, the result of a massive volcanic eruption in Sumatra.

Our smarts and adaptability helped us endure these tough times – omnivorism helped us weather scarcity.
 

 

 

A sea of vitamins

Both Dr. Marean and Dr. Tattersall agree that the sapiens hanging on in southern Africa couldn’t have lived entirely on shellfish.

Most likely they also spent time hunting and foraging roots inland, making pilgrimages to the sea during spring tides. Dr. Marean believes coastal cuisine may have allowed a paltry human population to hang on until climate change led to more hospitable terrain. He’s not entirely sold on the idea that marine life was necessarily a driver of human brain evolution.

By the time we incorporated seafood into our diets we were already smart, our brains shaped through millennia of selection for intelligence. “Being a marine forager requires a certain degree of sophisticated smarts,” he said. It requires tracking the lunar cycle and planning excursions to the coast at the right times. Shellfish were simply another source of calories.

Unless you ask Michael Crawford.

Dr. Crawford is a professor at Imperial College London and a strident believer that our brains are those of sea creatures. Sort of.

In 1972, he copublished a paper concluding that the brain is structurally and functionally dependent on an omega-3 fatty acid called docosahexaenoic acid, or DHA. The human brain is composed of nearly 60% fat, so it’s not surprising that certain fats are important to brain health. Nearly 50 years after Dr. Crawford’s study, omega-3 supplements are now a multi-billion-dollar business.

Omega-3s, or more formally, omega-3 polyunsaturated fatty acids (PUFAs), are essential fats, meaning they aren’t produced by the body and must be obtained through diet. We get them from vegetable oils, nuts, seeds, and animals that eat such things. But take an informal poll, and you’ll find most people probably associate omega fatty acids with fish and other seafood.

In the 1970s and 1980s, scientists took notice of the low rates of heart disease in Eskimo communities. Research linked their cardiovascular health to a high-fish diet (though fish cannot produce omega-3s, they source them from algae), and eventually the medical and scientific communities began to rethink fat. Study after study found omega-3 fatty acids to be healthy. They were linked with a lower risk for heart disease and overall mortality. All those decades of parents forcing various fish oils on their grimacing children now had some science behind them. There is such a thing as a good fat.

Recent studies show that some of omega-3s’ purported health benefits were exaggerated, but they do appear to benefit the brain, especially DHA and eicosapentaenoic acid, or EPA. Omega fats provide structure to neuronal cell membranes and are crucial in neuron-to-neuron communication. They increase levels of a protein called brain-derived neurotrophic factor (BDNF), which supports neuronal growth and survival. A growing body of evidence shows omega-3 supplementation may slow down the process of neurodegeneration, the gradual deterioration of the brain that results in Alzheimer’s disease and other forms of dementia.

Popping a daily omega-3 supplement or, better still, eating a seafood-rich diet, may increase blood flow to the brain. In 2019, the International Society for Nutritional Psychiatry Research recommended omega-3s as an adjunct therapy for major depressive disorder. PUFAs appear to reduce the risk for and severity of mood disorders such as depression and to boost attention in children with ADHD as effectively as drug therapies.

Many researchers claim there would’ve been plenty of DHA available on land to support early humans, and marine foods were just one of many sources.

Not Dr. Crawford.

He believes that brain development and function are not only dependent on DHA but, in fact, DHA sourced from the sea was critical to mammalian brain evolution. “The animal brain evolved 600 million years ago in the ocean and was dependent on DHA, as well as compounds such as iodine, which is also in short supply on land,” he said. “To build a brain, you need these building blocks, which were rich at sea and on rocky shores.”

Dr. Crawford cites his early biochemical work showing DHA isn’t readily accessible from the muscle tissue of land animals. Using DHA tagged with a radioactive isotope, he and his colleagues in the 1970s found that “ready-made” DHA, like that found in shellfish, is incorporated into the developing rat brain with 10-fold greater efficiency than plant- and land animal–sourced omega-3s, which exist largely as DHA’s metabolic precursor, alpha-linolenic acid. “I’m afraid the idea that ample DHA was available from the fats of animals on the savanna is just not true,” he said. According to Dr. Crawford, our tiny, wormlike ancestors were able to evolve primitive nervous systems and flit through the silt thanks to the abundance of healthy fat to be had by living in the ocean and consuming algae.

For over 40 years, Dr. Crawford has argued that rising rates of mental illness are a result of post–World War II dietary changes, especially the move toward land-sourced food and the medical community’s subsequent support of low-fat diets. He believes that omega-3s from seafood drove humans’ rapid neural march toward higher cognition and therefore remain essential to brain health. “The continued rise in mental illness is an incredibly important threat to mankind and society, and moving away from marine foods is a major contributor,” said Dr. Crawford.

University of Sherbrooke (Que.) physiology professor Stephen Cunnane, PhD, tends to agree that aquatically sourced nutrients were crucial to human evolution. It’s the importance of coastal living he’s not sure about. He believes hominins would’ve incorporated fish from lakes and rivers into their diet for millions of years. In his view, it wasn’t just omega-3s that contributed to our big brains, but a cluster of nutrients found in fish: iodine, iron, zinc, copper, and selenium among them. “I think DHA was hugely important to our evolution and brain health but I don’t think it was a magic bullet all by itself,” he said. “Numerous other nutrients found in fish and shellfish were very probably important, too, and are now known to be good for the brain.”

Dr. Marean agrees. “Accessing the marine food chain could have had a huge impact on fertility, survival, and overall health, including brain health, in part, due to the high return on omega-3 fatty acids and other nutrients.” But, he speculates, before MIS6, hominins would have had access to plenty of brain-healthy terrestrial nutrition, including meat from animals that consumed omega-3–rich plants and grains.

Dr. Cunnane agrees with Dr. Marean to a degree. He’s confident that higher intelligence evolved gradually over millions of years as mutations inching the cognitive needle forward conferred survival and reproductive advantages – but he maintains that certain advantages like, say, being able to shuck an oyster, allowed an already intelligent brain to thrive.

Foraging marine life in the waters off Africa likely played an important role in keeping some of our ancestors alive and supported our subsequent propagation throughout the world. By this point, the human brain was already a marvel of consciousness and computing, not too dissimilar to the one we carry around today.

In all likelihood, Pleistocene humans got their nutrients and calories wherever they could. If we lived inland, we hunted. Maybe we speared the occasional catfish. We sourced nutrients from fruits, leaves, and nuts. A few times a month, those of us near the coast enjoyed a feast of mussels and oysters.

Dr. Stetka is an editorial director at Medscape.com, a former neuroscience researcher, and a nonpracticing physician. A version of this article first appeared on Medscape.


'Living brain implants' may restore stroke mobility

Article Type
Changed
Tue, 04/06/2021 - 10:33

 

Restoring movement following a stroke can be challenging, but recent proof-of-concept research may offer an effective way to do just that. Researchers behind the ongoing Cortimo trial successfully performed a procedure on a patient 2 years removed from a stroke, in which microelectrode arrays were implanted into his brain to decode signals driving motor function. These signals then allowed him to operate a powered brace worn on his paralyzed arm.

This news organization spoke with the trial’s principal investigator, Mijail D. Serruya, MD, PhD, an assistant professor of neurology at Thomas Jefferson University Hospital, Philadelphia, about the trial’s initial findings, what this technology may ultimately look like, and the implications for stroke patients in knowing that restorative interventions may be on the horizon.
 

How did you first get involved with implanting electrodes to help stroke patients with recovery?

I was involved in the first human application of a microelectrode array in a young man who had quadriplegia because of a spinal cord injury. We showed that we could record signal directly from his motor cortex and use it to move a cursor on the screen, and open and close a prosthetic hand and arm.

I was naive and thought that this would soon be a widely available clinical medical device. Now it’s nearly 15 years later, and while it certainly has been safely used in multiple labs to record signals from people with spinal cord injury, amyotrophic lateral sclerosis (ALS), or locked-in syndrome from a brain stem stroke, it still requires a team of technicians and a percutaneous connector. It really has not gotten out of the university.

A few years ago I spoke with Robert Rosenwasser, MD, chairman of the department of neurosurgery at Thomas Jefferson, who runs a very busy stroke center and performed the surgery in this trial. We put our heads together and said: “Maybe the time is now to see whether we can move this technology to this much more prevalent condition of a hemispheric stroke.” And that’s what we did.
 

How did the idea of using brain-computer interfaces begin?

Around 20 years ago, if you had someone who had severe paralysis and you wanted to restore movement, the question was, where can you get a good control signal from? Obviously, if someone can talk, they can use a voice-actuated system with speech recognition and maybe you can track their eye gaze. But if they’re trying to move their limbs, you want a motor control signal.

In someone who has end-stage ALS or a brain stem stroke, you can’t even record residual muscle activity; you have almost nothing to work with. The only thing left is to try to record directly from the brain itself.

It’s important to clarify that brain-computer interfaces are not necessarily stimulating the brain to inject the signal. They’re just recording the endogenous activity that the brain makes. In comparison, a deep brain stimulator is usually not recording anything; it’s just delivering energy to the brain and hoping for the best.

But what we’re doing is asking, if the person is trying to move the paralyzed limb but can’t, can we get to the source of the signal and then do something with it?
 

What’s the process for measuring that in, for example, someone who has a localized lesion in the motor cortex?

The first step is a scan. People have been doing functional MRI on patients who have had a stroke for as long as we’ve had fMRI. We know that people can activate areas of their brain around the stroke on fMRI, but obviously not within the stroke itself, because that tissue has been lesioned. However, we do know that the adjacent circuitry and other regions do appear able to be modulated.

So by having a person either imagine trying to do what they want to do or doing what they can do, if they have some tiny residual movement, you can then identify a kind of hot spot on the fMRI where the brain gobbles up all the oxygen because it’s so active. Then that gives you an anatomical target for the surgeon to place the electrode arrays.
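For readers who want a concrete sense of what “finding the hot spot” means computationally, the toy sketch below (Python, entirely synthetic data) contrasts task and rest volumes voxel by voxel and reports the peak location outside the lesion. The array sizes, the lesion mask, and the simple t-statistic are illustrative assumptions, not the analysis pipeline used in the Cortimo trial.

```python
# Illustrative sketch only: a toy task-vs-rest contrast to find an activation
# "hot spot" in a synthetic 4-D fMRI volume. Shapes, the lesion mask, and the
# simple t-statistic are assumptions, not the Cortimo pipeline.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: a 20x20x20-voxel volume sampled over 60 time points.
bold = rng.normal(0.0, 1.0, size=(20, 20, 20, 60))
task = np.zeros(60, dtype=bool)
task[::2] = True                      # alternate task/rest volumes

# Plant a weak "activation" in one region and mark a nearby lesion (no usable signal).
bold[5:8, 5:8, 5:8, task] += 1.5
lesion_mask = np.zeros((20, 20, 20), dtype=bool)
lesion_mask[9:12, 5:8, 5:8] = True

# Voxel-wise two-sample t-statistic (task vs. rest), ignoring lesioned voxels.
task_mean = bold[..., task].mean(axis=-1)
rest_mean = bold[..., ~task].mean(axis=-1)
pooled_sd = np.sqrt(bold[..., task].var(axis=-1) / task.sum()
                    + bold[..., ~task].var(axis=-1) / (~task).sum())
t_map = np.where(lesion_mask, -np.inf, (task_mean - rest_mean) / (pooled_sd + 1e-9))

# The peak voxel is a crude stand-in for the anatomical target handed to the surgeon.
peak = np.unravel_index(np.argmax(t_map), t_map.shape)
print("candidate hot spot (voxel indices):", peak)
```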
 

The Cortimo trial’s enticing findings

What are the most striking results that you’ve seen so far with the device?

The first thing is that we were able to get such recordings at all. We knew from fMRI that there were fluctuations in oxygenation when the person was trying to do something they couldn’t do. But nobody knew that you would see this whole population of individual neurons chattering away when you place these electrode arrays in the motor cortex right next to the stroke, or that you could make sense of what you’re recording.

Obviously, that’s very encouraging and gives us hope that many months or years after a stroke, people’s brains are able to maintain this representation of all these different movements and plans. It’s almost like it’s trapped on the other side of the stroke and some of the signals can’t get out.

The other discovery we’re pleased with is that we can actually decode signals in real time and the person can use them to do something, such as trigger the brace to open and close the hand. That’s very different from all the prior research with brain array interfaces.

Furthermore, the gentleman who participated actually had strokes in other parts of his brain affecting his vision; he had homonymous hemianopia. That raised the question of what happens if you affect parts of the brain that have to do with attention and visual processing. Could a system like this work? And again, the answer appears to be yes.
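As a rough illustration of what decoding signals in real time can look like, the sketch below fits a simple linear readout that maps binned spike counts to an open-or-close command. The channel count, bin width, and decoder are assumptions made for the example; they are not the decoder used in the Cortimo trial.

```python
# Illustrative sketch only: decoding binned spike counts into an "open"/"close"
# command for a powered brace. Channel count, bin width, and the linear readout
# are assumptions for illustration, not the trial's actual decoder.
import numpy as np

rng = np.random.default_rng(1)
n_channels, n_train = 96, 400          # e.g., one 96-channel array, 400 labeled bins

# Synthetic training data: firing rates shift slightly when the intent is "open".
intent = rng.integers(0, 2, size=n_train)               # 0 = close, 1 = open
rates = rng.poisson(5, size=(n_train, n_channels)).astype(float)
rates[intent == 1, : n_channels // 2] += 2.0

# Fit a ridge-regularized linear readout mapping spike counts -> intent.
X = np.hstack([rates, np.ones((n_train, 1))])            # add a bias column
w = np.linalg.solve(X.T @ X + 1e-3 * np.eye(X.shape[1]), X.T @ intent)

def decode(bin_counts: np.ndarray) -> str:
    """Map one bin of spike counts to a brace command."""
    score = np.append(bin_counts, 1.0) @ w
    return "open" if score > 0.5 else "close"

# Simulated online use: one new 100-ms bin of counts arrives and is decoded.
new_bin = rng.poisson(5, size=n_channels).astype(float)
new_bin[: n_channels // 2] += 2.0                        # the user intends "open"
print(decode(new_bin))
```

In practice the readout would be retrained or recalibrated regularly, since recorded neural signals drift over days and weeks, but the basic loop of bin, decode, and actuate is the same.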
 

What are the next steps for this technology before it can potentially become available in the clinic?

For this to work, the system clearly has to be fully implantable. What we used was percutaneous. The risk-benefit may be acceptable for someone who has quadriplegia because of, for example, spinal cord injury or end-stage ALS who may already have a tracheostomy and a percutaneous endoscopic gastrostomy. But for someone who is hemiparetic and ambulatory, that may not be acceptable. And a fully implantable system would also have much better patient compliance.

Also, when you’re recording from lots and lots of individual brain cells at many, many samples a second on many, many channels, it’s certainly an engineering challenge. It’s not just a single channel that you occasionally query; it’s hundreds of thousands of channels of this complicated data stream.

But these are solvable challenges. People have been making a lot of progress. It’s really a matter of funding and the engineering expertise, rather than some sort of fundamental scientific breakthrough.

With that said, I think it could be within the next 5-10 years that we could actually have a product that expands the toolbox of what can be done for patients who’ve had a stroke, if they’re motivated and there’s no real contraindication.
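To put the bandwidth challenge mentioned above in perspective, here is a back-of-the-envelope calculation of the raw data rate from a single intracortical array. The channel count, sampling rate, and bit depth are typical published values assumed for illustration, not Cortimo system specifications.

```python
# Back-of-the-envelope sketch of the raw data rate for broadband neural recording.
# The numbers (96-channel array, 30 kHz sampling, 16-bit samples) are typical of
# intracortical arrays and are assumptions, not Cortimo specs.
channels = 96
sample_rate_hz = 30_000
bytes_per_sample = 2            # 16-bit analog-to-digital conversion

bytes_per_second = channels * sample_rate_hz * bytes_per_sample
print(f"{bytes_per_second / 1e6:.1f} MB/s,",
      f"{bytes_per_second * 3600 / 1e9:.0f} GB/hour")
# -> roughly 5.8 MB/s, about 21 GB per hour, before spike detection or compression
```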
 

 

 

Creating a novel device

On that point, are you partnering with engineering and technology companies?

The hope is that we and other groups working on this can do for the interface sort of what Celera Genomics did for the Human Genome Project. By having enough interest and investment, you may be able to propel the field forward to widespread use rather than just a purely academic, lab-science type of project.

We are in discussion with different companies to see how we can move ahead with this, and we would be pleased to work with whomever is interested. It may be that different companies have different pieces of the puzzle – a better sensor or a better wireless transmitter.

The plan is to move as quickly as we can to a fully implantable system. And then the benchmark for any kind of clinical advancement is to do a prospective trial. With devices, if you can get a big enough effect size, then you sometimes don’t need quite as many patients to prove it. If the paralysis is striking enough and you can reverse it, then you can convince the Food and Drug Administration of the device’s safety and efficacy, and the various insurance companies that it’s actually reasonable and necessary.
 

How long will an implantable device last?

That’s a key question and concern. If you have someone like our participant, who’s in his early 40s, will it keep working 10, 20, 30, 40 years? For the rest of his life? Deep brain stimulators and cochlear implants do function for those long durations, but their designs are quite different. There’s a macroelectrode that’s just delivering current, which is very different from listening in on this microscopic scale. There are different technical considerations.

One possible solution is to make the device out of living tissue, which is something I just wrote about with my colleague D. Kacy Cullen. Living electrodes and amplifiers may seem a bit like science fiction, but on the other hand, we have over a century of plastic surgeons, neurosurgeons, and orthopedic surgeons doing all kinds of complicated modifications of the body, moving nerves and vessels around. It makes you realize that, in a sense, they’ve already done living electrodes by doing a nerve transfer. So the question becomes whether we can refine that living electrode technology, which could then open up more possibilities.
 

Are there any final messages you’d like to share with the clinician audience of this news organization?

Regardless of our specialty, we’re always telling our patients about the benefits of things like eating healthy, exercise, and sleep. Now we can point to the fact that, 2 years after stroke, all of these brain areas are still active, and devices that can potentially reverse paralysis in the limbs may be available in the coming 5- or 10-plus years. That gives clinicians more justification to tell their patients to really stay on top of those things so that they can be in as optimal brain-mind health as possible to someday benefit from them.

Patients and their families need to be part of the conversation about where this is all going. That’s one thing that’s totally different for brain devices versus other devices, where a person’s psychological state doesn’t necessarily matter. But with a brain device, your mental state, psychosocial situation, exercise, sleep – the way you think about and approach it – actually changes the structure of the brain pretty dramatically.

I don’t want to cause unreasonable hope that we’re going to snap our fingers and it’s going to be cured. But I do think it’s fair to raise a possibility as a way to say that keeping oneself really healthy is justified.

A version of this article first appeared on Medscape.com.
