Arcalyst gets FDA nod as first therapy for recurrent pericarditis
The Food and Drug Administration has approved rilonacept (Arcalyst) to treat recurrent pericarditis and reduce the risk for recurrence in adults and children 12 years and older.
Approval of the weekly subcutaneous injection offers patients the first and only FDA-approved therapy for recurrent pericarditis, the agency said in a release.
Recurrent pericarditis is characterized by remitting-relapsing inflammation of the pericardium, and therapeutic options have been limited to NSAIDs, colchicine, and corticosteroids.
Rilonacept is a recombinant fusion protein that blocks interleukin-1 alpha and interleukin-1 beta signaling. It is already approved by the FDA to treat a group of rare inherited inflammatory diseases called cryopyrin-associated periodic syndromes.
The new indication is based on the pivotal phase 3 RHAPSODY trial in 86 patients with acute symptoms of recurrent pericarditis and systemic inflammation. After randomization, pericarditis recurred in 2 of 30 patients (7%) treated with rilonacept and in 23 of 31 patients (74%) treated with placebo, representing a 96% reduction in the relative risk for recurrence with rilonacept.
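As a quick check on the arithmetic (an explanatory aside, not part of the trial report), the crude recurrence risks and their ratio follow directly from the counts above; the larger published 96% figure is consistent with the trial’s time-to-event (hazard ratio) analysis rather than this simple ratio of final recurrence rates:

\[
\mathrm{RR} = \frac{2/30}{23/31} = \frac{0.067}{0.742} \approx 0.09, \qquad 1 - \mathrm{RR} \approx 91\%.
\]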
Patients who received rilonacept were also pain free or had minimal pain on 98% of trial days, whereas those who received placebo had minimal or no pain on 46% of trial days.
The most common adverse effects of rilonacept are injection-site reactions and upper respiratory tract infections.
Serious, life-threatening infections have been reported in patients taking rilonacept, according to the FDA. Patients with active or chronic infections should not take the drug.
The FDA label also advises that patients should avoid live vaccines while taking rilonacept and that it should be discontinued if a hypersensitivity reaction occurs.
The commercial launch is expected in April, according to the company.
A version of this article first appeared on Medscape.com.
Metoprolol increases severity, but not risk, of COPD exacerbations
Background: Beta-blockers are underutilized in patients with both COPD and established cardiovascular indications for beta-blocker therapy, despite evidence suggesting overall benefit. Prior observational studies have associated beta-blockers with improved outcomes in COPD in the absence of cardiovascular indications; however, this has not been previously evaluated in a randomized trial.
Study design: Placebo-controlled, double-blind, prospective, randomized trial.
Setting: A total of 26 centers in the United States.
Synopsis: The BLOCK COPD trial randomized more than 500 patients with moderate to severe COPD and no established indication for beta-blocker therapy to extended-release metoprolol or placebo. There was no significant difference in the primary endpoint of time until first exacerbation. While there was no difference in the overall risk of COPD exacerbations, the trial was terminated early because of an increased risk of severe or very severe exacerbations in the metoprolol group (hazard ratio, 1.91; 95% confidence interval, 1.20-2.83). Severe and very severe exacerbations were defined as those requiring hospitalization and mechanical ventilation, respectively.
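For readers who want the statistics unpacked (an explanatory aside, not from the trial report), the point estimate implies a near-doubling of the instantaneous rate of severe exacerbation, and the result is statistically significant because the confidence interval excludes 1.0:

\[
\mathrm{HR} = 1.91 \;\Rightarrow\; \text{rate}_{\text{metoprolol}} \approx 1.9 \times \text{rate}_{\text{placebo}}, \qquad 95\%\ \mathrm{CI}\ (1.20,\ 2.83) \not\ni 1.0.
\]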
Importantly, this trial excluded patients with established indications for beta-blocker therapy, and study findings should not be applied to this population.
Bottom line: Metoprolol is not associated with increased risk of COPD exacerbations, but is associated with increased severity of COPD exacerbations in patients with moderate to severe COPD who have no established indications for beta-blockers.
Citation: Dransfield MT et al. Metoprolol for the prevention of acute exacerbations of COPD. N Engl J Med. 2019 Oct 20. doi: 10.1056/NEJMoa1908142.
Dr. Gerstenberger is a hospitalist and clinical assistant professor of medicine at the University of Utah, Salt Lake City.
A paleolithic raw bar, and the human brush with extinction
This essay is adapted from the newly released book, “A History of the Human Brain: From the Sea Sponge to CRISPR, How Our Brain Evolved.”
“He was a bold man that first ate an oyster.” – Jonathan Swift
That man, or just as likely that woman, may have done so out of necessity. It was either eat this glistening, gray blob of briny goo or perish.
Beginning 190,000 years ago, a glacial age we identify today as Marine Isotope Stage 6, or MIS6, had set in, cooling and drying out much of the planet. There was widespread drought, leaving the African plains a harsher, more barren substrate for survival – an arena of competition, desperation, and starvation for many species, including ours. Some estimates have the sapiens population dipping to just a few hundred people during MIS6. Like other apes today, we were an endangered species. But through some nexus of intelligence, ecological exploitation, and luck, we managed. Anthropologists argue over what part of Africa would’ve been hospitable enough to rescue sapiens from Darwinian oblivion. Arizona State University archaeologist Curtis Marean, PhD, believes the continent’s southern shore is a good candidate.
For 2 decades, Dr. Marean has overseen excavations at a site called Pinnacle Point on the South African coast. The region has over 9,000 plant species, including the world’s most diverse population of geophytes, plants with underground energy-storage organs such as bulbs, tubers, and rhizomes. These subterranean stores are rich in calories and carbohydrates, and, by virtue of being buried, are protected from most other species (save the occasional tool-wielding chimpanzee). They are also adapted to cold climates and, when cooked, easily digested. All in all, a coup for hunter-gatherers.
The other enticement at Pinnacle Point could be found with a few easy steps toward the sea. Mollusks. Geological samples from MIS6 show South Africa’s shores were packed with mussels, oysters, clams, and a variety of sea snails. We almost certainly turned to them for nutrition.
Dr. Marean’s research suggests that, sometime around 160,000 years ago, at least one group of sapiens began supplementing their terrestrial diet by exploiting the region’s rich shellfish beds. This is the oldest evidence to date of humans consistently feasting on seafood – easy, predictable, immobile calories. No hunting required. As inland Africa dried up, learning to shuck mussels and oysters was a key adaptation to coastal living, one that supported our later migration out of the continent.
Dr. Marean believes the change in behavior was possible thanks to our already keen brains, which supported an ability to track tides, especially spring tides. Spring tides occur twice a month with each new and full moon and result in the greatest difference between high and low tidewaters. The people of Pinnacle Point learned to exploit this cycle. “By tracking tides, we would have had easy, reliable access to high-quality proteins and fats from shellfish every 2 weeks as the ocean receded,” he says. “Whereas you can’t rely on land animals to always be in the same place at the same time.” Work by Jan De Vynck, PhD, a professor at Nelson Mandela University in South Africa, supports this idea, showing that foraging shellfish beds under optimal tidal conditions can yield a staggering 3,500 calories per hour!
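To put that rate in context (using a modern reference intake of roughly 2,000-2,500 kcal/day as an assumption, not a figure from Dr. De Vynck’s work), a full day’s energy needs could be met in well under an hour of optimal-tide foraging:

\[
\frac{2{,}000\text{–}2{,}500\ \text{kcal/day}}{3{,}500\ \text{kcal/h}} \approx 0.6\text{–}0.7\ \text{h}, \quad \text{i.e., roughly } 35\text{–}45 \text{ minutes}.
\]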
“I don’t know if we owe our existence to seafood, but it was certainly important for the population [that Dr.] Curtis studies. That place is full of mussels,” said Ian Tattersall, PhD, curator emeritus with the American Museum of Natural History in New York.
“And I like the idea that during a population bottleneck we got creative and learned how to focus on marine resources.” Innovations, Dr. Tattersall explained, typically occur in small, fixed populations. Large populations have too much genetic inertia to support radical innovation; the status quo is enough to survive. “If you’re looking for evolutionary innovation, you have to look at smaller groups.”
MIS6 wasn’t the only near-extinction in our past. During the Pleistocene epoch, roughly 2.5 million to 12,000 years ago, humans tended to maintain a small population, hovering around a million and later growing to maybe 8 million at most. Periodically, our numbers dipped as climate shifts, natural disasters, and food shortages brought us dangerously close to extinction. Modern humans are descended from the hardy survivors of these bottlenecks.
One especially dire stretch occurred around 1 million years ago. Our effective population (the number of breeding individuals) shriveled to around 18,000, smaller than that of other apes at the time. Worse, our genetic diversity – the insurance policy on evolutionary success and the ability to adapt – plummeted. A similar near-extinction may have occurred around 75,000 years ago, the result of a massive volcanic eruption in Sumatra.
Our smarts and adaptability helped us endure these tough times – omnivorism helped us weather scarcity.
A sea of vitamins
Both Dr. Marean and Dr. Tattersall agree that the sapiens hanging on in southern Africa couldn’t have lived entirely on shellfish.
Most likely they also spent time hunting and foraging roots inland, making pilgrimages to the sea during spring tides. Dr. Marean believes coastal cuisine may have allowed a paltry human population to hang on until climate change led to more hospitable terrain. He’s not entirely sold on the idea that marine life was necessarily a driver of human brain evolution.
By the time we incorporated seafood into our diets, we were already smart, our brains shaped through millennia of selection for intelligence. “Being a marine forager requires a certain degree of sophisticated smarts,” he said. It requires tracking the lunar cycle and planning excursions to the coast at the right times. Shellfish were simply another source of calories.
Unless you ask Michael Crawford.
Dr. Crawford is a professor at Imperial College London and a strident believer that our brains are those of sea creatures. Sort of.
In 1972, he copublished a paper concluding that the brain is structurally and functionally dependent on an omega-3 fatty acid called docosahexaenoic acid, or DHA. The human brain is composed of nearly 60% fat, so it’s not surprising that certain fats are important to brain health. Nearly 50 years after Dr. Crawford’s study, omega-3 supplements are now a multi-billion-dollar business.
Omega-3s, or more formally, omega-3 polyunsaturated fatty acids (PUFAs), are essential fats, meaning they aren’t produced by the body and must be obtained through diet. We get them from vegetable oils, nuts, seeds, and animals that eat such things. But take an informal poll, and you’ll find most people probably associate omega fatty acids with fish and other seafood.
In the 1970s and 1980s, scientists took notice of the low rates of heart disease in Eskimo communities. Research linked their cardiovascular health to a high-fish diet (though fish cannot produce omega-3s, they source them from algae), and eventually the medical and scientific communities began to rethink fat. Study after study found omega-3 fatty acids to be healthy. They were linked with a lower risk for heart disease and overall mortality. All those decades of parents forcing various fish oils on their grimacing children now had some science behind them. There is such a thing as a good fat.
Omega-3s are integral to brain health, especially DHA and eicosapentaenoic acid, or EPA. Omega fats provide structure to neuronal cell membranes and are crucial in neuron-to-neuron communication. They increase levels of a protein called brain-derived neurotrophic factor (BDNF), which supports neuronal growth and survival. A growing body of evidence shows omega-3 supplementation may slow down the process of neurodegeneration, the gradual deterioration of the brain that results in Alzheimer’s disease and other forms of dementia.
Popping a daily omega-3 supplement or, better still, eating a seafood-rich diet, may increase blood flow to the brain. In 2019, the International Society for Nutritional Psychiatry Research recommended omega-3s as an adjunct therapy for major depressive disorder. PUFAs appear to reduce the risk for and severity of mood disorders such as depression and to boost attention in children with ADHD as effectively as drug therapies.
Many researchers claim there would’ve been plenty of DHA available on land to support early humans, and marine foods were just one of many sources.
Not Dr. Crawford.
He believes that brain development and function are not only dependent on DHA but, in fact, DHA sourced from the sea was critical to mammalian brain evolution. “The animal brain evolved 600 million years ago in the ocean and was dependent on DHA, as well as compounds such as iodine, which is also in short supply on land,” he said. “To build a brain, you need these building blocks, which were rich at sea and on rocky shores.”
Dr. Crawford cites his early biochemical work showing DHA isn’t readily accessible from the muscle tissue of land animals. Using DHA tagged with a radioactive isotope, he and his colleagues in the 1970s found that “ready-made” DHA, like that found in shellfish, is incorporated into the developing rat brain with 10-fold greater efficiency than DHA synthesized from its plant- and land animal–sourced metabolic precursor, alpha-linolenic acid. “I’m afraid the idea that ample DHA was available from the fats of animals on the savanna is just not true,” he contends. According to Dr. Crawford, our tiny, wormlike ancestors were able to evolve primitive nervous systems and flit through the silt thanks to the abundance of healthy fat to be had by living in the ocean and consuming algae.
For over 40 years, Dr. Crawford has argued that rising rates of mental illness are a result of post–World War II dietary changes, especially the move toward land-sourced food and the medical community’s subsequent support of low-fat diets. He feels that omega-3s from seafood were critical to humans’ rapid neural march toward higher cognition, and are therefore critical to brain health. “The continued rise in mental illness is an incredibly important threat to mankind and society, and moving away from marine foods is a major contributor,” said Dr. Crawford.
University of Sherbrooke (Que.) physiology professor Stephen Cunnane, PhD, tends to agree that aquatically sourced nutrients were crucial to human evolution. It’s the importance of coastal living he’s not sure about. He believes hominins would’ve incorporated fish from lakes and rivers into their diet for millions of years. In his view, it wasn’t just omega-3s that contributed to our big brains, but a cluster of nutrients found in fish: iodine, iron, zinc, copper, and selenium among them. “I think DHA was hugely important to our evolution and brain health but I don’t think it was a magic bullet all by itself,” he said. “Numerous other nutrients found in fish and shellfish were very probably important, too, and are now known to be good for the brain.”
Dr. Marean agrees. “Accessing the marine food chain could have had a huge impact on fertility, survival, and overall health, including brain health, in part, due to the high return on omega-3 fatty acids and other nutrients.” But, he speculates, before MIS6, hominins would have had access to plenty of brain-healthy terrestrial nutrition, including meat from animals that consumed omega-3–rich plants and grains.
Dr. Cunnane agrees with Dr. Marean to a degree. He’s confident that higher intelligence evolved gradually over millions of years as mutations inching the cognitive needle forward conferred survival and reproductive advantages – but he maintains that certain advantages like, say, being able to shuck an oyster, allowed an already intelligent brain to thrive.
Foraging marine life in the waters off of Africa likely played an important role in keeping some of our ancestors alive and supported our subsequent propagation throughout the world. By this point, the human brain was already a marvel of consciousness and computing, not too dissimilar to the one we carry around today.
In all likelihood, Pleistocene humans got their nutrients and calories wherever they could. If we lived inland, we hunted. Maybe we speared the occasional catfish. We sourced nutrients from fruits, leaves, and nuts. A few times a month, those of us near the coast enjoyed a feast of mussels and oysters.
Dr. Stetka is an editorial director at Medscape.com, a former neuroscience researcher, and a nonpracticing physician. A version of this article first appeared on Medscape.
Biomarkers predict efficacy of DKN-01 in endometrial cancer
Among 29 patients with heavily pretreated epithelial endometrial cancer (EEC), outcomes of DKN-01 monotherapy were best in patients with Wnt activating mutations, high levels of DKK1 expression, or PIK3CA activating mutations.
Patients in these groups had better disease control rates and progression-free survival (PFS), reported Rebecca C. Arend, MD, of the University of Alabama at Birmingham.
“Future development will focus on biomarker-selected patients, specifically patients with Wnt activating mutations, high tumoral DKK1, and PIK3CA activating mutations,” Dr. Arend said at the Society of Gynecologic Oncology’s Virtual Annual Meeting on Women’s Cancer (Abstract 10717).
She explained that DKK1 has been shown to modulate signaling in the Wnt/beta-catenin pathway, a key regulator of cellular functions in humans and animals that has been highly conserved throughout evolution.
“DKK1 activates PI3 kinase/AKT signaling by binding to the CKAP4 receptor to promote tumor growth,” Dr. Arend explained.
Focus on monotherapy
Dr. Arend and colleagues conducted a phase 2 basket trial of DKN-01 either as monotherapy or in combination with paclitaxel in patients with EEC, epithelial ovarian cancer, and carcinosarcoma (malignant mixed Mullerian tumor). The trial design required at least 50% of patients to have Wnt signaling alterations.
Dr. Arend reported results for 29 patients with EEC who received DKN-01 monotherapy.
There were nine patients with Wnt activating mutations. None achieved a complete response (CR) or partial response (PR), but six had stable disease (SD), for a disease control rate of 67%. Of the 20 patients without Wnt activating mutations, one achieved a CR, one achieved a PR, and three had SD, for a disease control rate of 25%.
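The disease control rates quoted here appear to follow the standard definition, counting complete responses, partial responses, and stable disease together:

\[
\mathrm{DCR} = \frac{\mathrm{CR} + \mathrm{PR} + \mathrm{SD}}{N}: \qquad \frac{0+0+6}{9} \approx 67\%, \qquad \frac{1+1+3}{20} = 25\%.
\]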
The median PFS was 5.5 months in patients with Wnt activating mutations and 1.8 months in patients without the mutations.
Importantly, Dr. Arend noted, tumors with a Wnt activating mutation showed 14.4-fold higher DKK1 expression.
When she and her colleagues analyzed patients by DKK1 expression, the team found that high levels of DKK1 correlated with better clinical outcomes. The disease control rate was 57% for patients in the highest third of DKK1 expression (1 PR, 3 SD) vs. 7% (1 SD) for those in the lowest two-thirds. The median PFS was 3 months and 1.8 months, respectively.
Of the seven patients whose tumors could not be evaluated for DKK1 expression, one patient had a CR and 5 had SD, for a disease control rate of 86%. The median PFS in this group was 8.0 months. Three of these patients had known Wnt activating mutations.
“Given this correlation [between] higher DKK1 expression [and] Wnt activating mutations, one could consider that, at a minimum, these patients would have had a higher DKK1 expression as well,” Dr. Arend said.
She and her colleagues also found that patients with PIK3CA activating mutations and two or fewer prior lines of therapy had a 33% overall response rate (1 CR, 1 PR), compared with 0% for patients without these mutations who had two or fewer prior therapies. Patients with PIK3CA activating mutations also had a better disease control rate (67% vs. 40%) and median PFS (5.6 months vs. 1.8 months).
Although Dr. Arend did not present safety data from the study at SGO 2021, she reported some data in a video investor call for Leap Therapeutics, which is developing DKN-01. She said the most common treatment-emergent adverse events with DKN-01 were nausea in 28.8% of patients, fatigue in 26.7%, and constipation in 11.5%. Serious events included acute kidney injury, dyspnea, nausea, and peripheral edema (occurring in 1.9% of patients each).
Monotherapy or combination?
In the question-and-answer session following Dr. Arend’s presentation, comoderator Joyce Liu, MD, of the Dana-Farber Cancer Institute in Boston, said that “even in the DKK1-high tumors, the activity of DKN-01 as a monotherapy is a little bit limited.”
She asked whether the future of inhibitors targeting the Wnt/beta-catenin pathway will be limited to biomarker-specific populations or whether agents such as DKN-01 should be used in combinations.
“I do think that we need a lot more data to determine,” Dr. Arend replied. “I think that there may be a subset of patients, especially those that don’t tolerate the [lenvatinib/pembrolizumab] combo who may have an upregulation of beta-catenin or a Wnt mutation who could benefit from monotherapy.”
Dr. Arend added that data from her lab and others suggest that DKN-01 in combination with other agents holds promise for improving outcomes in biomarker-selected populations.
The current study is funded by Leap Therapeutics. Dr. Arend disclosed advisory board activity for the company and others. Dr. Liu reported personal fees from several companies, not including Leap Therapeutics.
FROM SGO 2021
Will psoriasis patients embrace proactive topical therapy?
Long-term proactive topical management of plaque psoriasis with twice-weekly calcipotriene/betamethasone dipropionate foam has been shown in a high-quality randomized trial to be more effective than conventional reactive management – but will patients go for it?
Bruce E. Strober, MD, PhD, has his doubts, and he shared them with Linda Stein Gold, MD, after she presented updated results from the 52-week PSO-LONG trial at Innovations in Dermatology: Virtual Spring Conference 2021.
Proactive therapy requires patients to keep applying a topical twice weekly even after their skin has cleared. And while they did so in this study with an assist in the form of monthly office visits and nudging from investigators, in real-world clinical practice that’s unlikely to happen, according to Dr. Strober, of Yale University, New Haven, Conn.
“It makes sense to do what’s being done in this study, there’s no doubt, but I’m concerned about adherence and whether patients are really going to do it,” he said.
“Adherence is going to be everything here, and you know patients don’t like to apply topicals to their body. Once they’re clear they’re just going to walk away from the topical,” Dr. Strober predicted.
Dr. Stein Gold countered: “When a study goes on for a full year, it starts to reflect real life.”
Moreover, the PSO-LONG trial provides the first high-quality evidence physicians can share with patients demonstrating that proactive management pays off in terms of fewer relapses and more time in remission over the long haul, added Dr. Stein Gold, director of dermatology clinical research at the Henry Ford Health System in Detroit.
PSO-LONG was a double-blind, international, phase 3 study including 545 adults with plaque psoriasis who had clear or almost-clear skin after 4 weeks of once-daily calcipotriene 0.005%/betamethasone dipropionate 0.064% (Cal/BD) foam (Enstilar), and were then randomized to twice-weekly proactive management or to a reactive approach involving application of vehicle on the same twice-weekly schedule. Relapses resulted in rescue therapy with 4 weeks of once-daily Cal/BD foam.
The primary endpoint was the median time to first relapse: 56 days with the proactive approach, a significant improvement over the 30 days with the reactive approach. Over the course of 52 weeks, the proactive group spent an additional 41 days in remission, compared with the reactive group. Patients randomized to twice-weekly Cal/BD foam averaged 3.1 relapses per year, compared with 4.8 with reactive management. The side-effect profiles in the two study arms were similar.
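For readers who want to see how these comparative figures fit together, here is a minimal arithmetic sketch using only the numbers reported above; the derived percentages are ours, not from the trial report.

```python
# Illustrative arithmetic from the reported PSO-LONG summary figures,
# not from patient-level trial data.
proactive_relapses_per_year = 3.1
reactive_relapses_per_year = 4.8
median_days_to_relapse = {"proactive": 56, "reactive": 30}

relative_reduction = 1 - proactive_relapses_per_year / reactive_relapses_per_year
print(f"Relative reduction in annualized relapses: {relative_reduction:.0%}")  # ~35%

delay = median_days_to_relapse["proactive"] - median_days_to_relapse["reactive"]
print(f"Median time to first relapse extended by {delay} days")  # 26 days
```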
Mean Physician Global Assessment scores and Psoriasis Area and Severity Index scores for the proactive group clearly separated from the reactive group by week 4, and those differences were maintained throughout the year. The area under the curve for the Physician Global Assessment score was 15% lower in the proactive group, and the area under the curve for the modified PASI score was 20% lower.
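To make the area-under-the-curve comparison concrete, the sketch below shows how an AUC is computed for a severity score tracked over time. The visit days and scores are hypothetical, chosen only to illustrate the calculation.

```python
# Hypothetical PGA scores (0 = clear, 4 = severe) at 4-week visits for one patient.
weeks = [0, 4, 8, 12, 16]
pga = [1.0, 0.5, 0.8, 0.6, 0.7]

# Trapezoidal rule: total "score-weeks" of disease burden over follow-up.
auc = sum((pga[i] + pga[i + 1]) / 2 * (weeks[i + 1] - weeks[i])
          for i in range(len(weeks) - 1))
print(f"PGA AUC: {auc:.1f} score-weeks")
# Comparing mean AUCs between treatment arms yields statements like
# "the AUC was 15% lower in the proactive group."
```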
“These results suggest that proactive management – a concept that’s been used for atopic dermatitis – could be applied to patients with psoriasis to prolong remission,” Dr. Stein Gold concluded at the conference, sponsored by MedscapeLIVE! and the producers of the Hawaii Dermatology Seminar and Caribbean Dermatology Symposium.
Asked how confident she is that patients in the real world truly will do this, Dr. Stein Gold replied: “You know, I don’t know. We hope so. Now we can tell them we actually have some data that supports treating the cleared areas. And it’s only twice a week, separated on Mondays and Thursdays.”
“I take a much more reactive approach,” Dr. Strober said. “I advise patients to get back in there with their topical steroid as soon as they see any signs of recurrence.”
He added that he’s eager to see if a proactive management approach such as the one that was successful in PSO-LONG is also beneficial using some of the promising topical agents with nonsteroidal mechanisms of action, which are advancing through the developmental pipeline.
Late in 2020, the Food and Drug Administration approved an expanded indication for Cal/BD foam, which includes the PSO-LONG data on the efficacy and safety of long-term twice-weekly therapy in adults in product labeling. The combination spray/foam was previously approved by the FDA as once-daily therapy in psoriasis patients aged 12 years and older, but only for up to 4 weeks because of safety concerns regarding longer use of the potent topical steroid as daily therapy.
The PSO-LONG trial was funded by LEO Pharma. Dr. Stein Gold reported serving as a paid investigator and/or consultant to LEO and numerous other pharmaceutical companies. Dr. Strober reported serving as a consultant to more than two dozen pharmaceutical companies. MedscapeLIVE! and this news organization are owned by the same parent company.
FROM INNOVATIONS IN DERMATOLOGY
The significance of mismatch repair deficiency in endometrial cancer
Women with Lynch syndrome are known to carry an approximately 60% lifetime risk of endometrial cancer. These cancers result from inherited deleterious mutations in genes that code for mismatch repair proteins. However, mismatch repair deficiency (MMR-d) is not exclusively found in the tumors of patients with Lynch syndrome, and much is being learned about this group of endometrial cancers, their behavior, and their vulnerability to targeted therapies.
During DNA replication, recombination, and chemical or physical damage, mismatches in base pairs frequently occur. Mismatch repair proteins function to identify and repair such errors, and the loss of their function causes the accumulation of insertions or deletions of short, repetitive sequences of DNA. This phenomenon can be measured using polymerase chain reaction (PCR) screening of known microsatellites to look for the accumulation of errors, a phenotype called microsatellite instability (MSI). The accumulation of errors in DNA sequences is thought to lead to mutations in cancer-related genes.
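As a rough illustration of the comparison behind PCR-based MSI testing, the sketch below checks hypothetical repeat lengths at the five classic Bethesda panel markers in tumor versus matched normal tissue. The marker names and the two-of-five convention are standard, but the data and the simplified threshold handling are ours, for illustration only.

```python
# Illustrative sketch of PCR-based MSI classification: compare repeat counts
# at known microsatellite markers in tumor vs. matched normal tissue.
# Marker names follow the classic Bethesda panel; the data are hypothetical.
normal = {"BAT-25": 25, "BAT-26": 26, "D2S123": 20, "D5S346": 18, "D17S250": 22}
tumor = {"BAT-25": 21, "BAT-26": 26, "D2S123": 16, "D5S346": 18, "D17S250": 19}

unstable = [marker for marker in normal if tumor[marker] != normal[marker]]
fraction_unstable = len(unstable) / len(normal)

# Conventional call: MSI-high if >=30% (here, >=2 of 5) markers are unstable.
if fraction_unstable >= 0.3:
    status = "MSI-high"
elif unstable:
    status = "MSI-low"
else:
    status = "microsatellite stable (MSS)"
print(f"Unstable markers: {unstable} -> {status}")
```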
The four predominant mismatch repair genes are MLH1, MSH2, MSH6, and PMS2. These genes may lose function through a germline/inherited mechanism, as in Lynch syndrome, or through sporadically acquired alterations. Approximately 20%-30% of endometrial cancers exhibit MMR-d, with acquired, sporadic losses of function accounting for the majority of cases and only approximately 10% resulting from Lynch syndrome. Mutations in PMS2 are the dominant genotype of Lynch syndrome, whereas loss of function in MLH1 is the most frequent aberration in sporadic cases of MMR-d endometrial cancer.1
Endometrial cancers can be tested for MMR-d by performing immunohistochemistry to look for loss of expression of the four most common MMR proteins. If there is loss of expression of MLH1, additional triage testing can be performed to determine whether this loss is caused by the epigenetic phenomenon of promoter hypermethylation; when present, this excludes Lynch syndrome and suggests a sporadic origin of the disease. If there is loss of expression of the MMR genes (including loss of MLH1 with subsequent negative testing for promoter methylation), the patient should receive genetic testing for a germline mutation indicating Lynch syndrome. As an adjunct or alternative to immunohistochemistry, PCR studies or next-generation sequencing can be used to detect microsatellite instability by identifying the expansion or reduction of repetitive DNA sequences in the tumor, compared with normal tissue.2
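The triage flow described above amounts to a small decision tree. The sketch below encodes it; the function name and inputs are hypothetical, and the output strings merely summarize the text rather than constitute a clinical decision tool.

```python
# Illustrative encoding of the MMR immunohistochemistry triage described above.
def triage_mmr_ihc(proteins_lost, mlh1_promoter_methylated=None):
    """proteins_lost: set of MMR proteins with lost expression on IHC."""
    if not proteins_lost:
        return "MMR-proficient: IHC does not trigger further Lynch workup"
    if "MLH1" in proteins_lost:
        if mlh1_promoter_methylated is True:
            return "Sporadic MMR-d (epigenetic MLH1 silencing); Lynch syndrome excluded"
        if mlh1_promoter_methylated is False:
            return "MLH1 loss without promoter methylation: refer for germline testing"
        return "MLH1 loss: perform promoter methylation testing next"
    return "Loss of " + ", ".join(sorted(proteins_lost)) + ": refer for germline testing"

print(triage_mmr_ihc({"MLH1", "PMS2"}, mlh1_promoter_methylated=True))
print(triage_mmr_ihc({"MSH2", "MSH6"}))
```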
It is of the highest importance to identify endometrial cancers caused by Lynch syndrome because this enables providers to offer cascade testing of relatives and to intensify screening or preventive measures for the many other cancers (such as colon, upper gastrointestinal, breast, and urothelial) for which these patients are at risk. Therefore, routine screening for MMR-d tumors is recommended in all cases of endometrial cancer, not simply those diagnosed at a young age or in patients with a strong family history.3 Using family history, primary tumor site, and age as triggers for Lynch syndrome screening, as in the Bethesda Guidelines, is associated with an 82% sensitivity for identifying Lynch syndrome. In a meta-analysis including testing results from 1,159 women with endometrial cancer, 43% of patients who were diagnosed with Lynch syndrome via molecular analysis would have been missed by clinical screening using the Bethesda Guidelines.2
Discovering cases of Lynch syndrome is not the only benefit of routine testing for MMR-d in endometrial cancers. There is also significant value in characterizing sporadic mismatch repair–deficient tumors because this information provides prognostic information and guides therapy. Tumors with a microsatellite instability–high phenotype/MMR-d were identified as one of the four distinct molecular subgroups of endometrial cancer by the Cancer Genome Atlas.4 Patients with this molecular profile exhibited “intermediate” prognostic outcomes, performing better than those with “serous-like” cancers with p53 mutations, yet worse than patients in the POLE ultramutated group, who rarely experience recurrences or death, even in the setting of unfavorable histology.
Beyond prognostication, the molecular profile of endometrial cancers also influences their responsiveness to therapeutics, highlighting the importance of splitting, not lumping, endometrial cancers into relevant molecular subgroups when designing research and practicing clinical medicine. The PORTEC-3 trial studied 410 women with high-risk endometrial cancer, randomizing participants to adjuvant radiation alone or radiation with chemotherapy.5 There were no differences in progression-free survival between the two therapeutic strategies when analyzed in aggregate. However, when analyzed by Cancer Genome Atlas molecular subgroup, there was a clear benefit from chemotherapy for patients with p53 mutations. For patients with MMR-d tumors, no such benefit was observed: This molecular subgroup did no better with the addition of platinum and taxane chemotherapy than with radiation alone. Unfortunately, recurrence rates for patients with MMR-d tumors remained high, suggesting that we can and must discover therapies for these tumors that are more effective than conventional radiation or platinum and taxane chemotherapy.

Targeted therapy may be the solution to this problem. Through microsatellite instability, MMR-d tumors accumulate somatic mutations that produce neoantigens, creating an immunogenic environment. This state up-regulates immune checkpoint proteins, which serve as actionable targets for anti-PD-1 antibodies such as pembrolizumab, a drug shown to be highly active against MMR-d endometrial cancer. In the landmark KEYNOTE-158 trial, patients with advanced, recurrent solid tumors exhibiting MMR-d were treated with pembrolizumab.6 This included 49 patients with endometrial cancer, among whom there was a 79% response rate. Subsequently, pembrolizumab was granted Food and Drug Administration approval for use in advanced, recurrent MMR-d/MSI-high endometrial cancer. Trials are currently enrolling patients to explore the utility of this drug in the up-front setting in both early- and late-stage disease, with the hope that this targeted therapy can do what conventional cytotoxic chemotherapy has failed to do.
Therefore, given the clinical significance of mismatch repair deficiency, all patients with endometrial cancer should have their tumors investigated for loss of expression of these proteins and, if loss is present, should be considered for the possibility of Lynch syndrome. While most will not have an inherited cause, this information about tumor biology remains critically important in prognostication, in decision-making surrounding other therapies, and in determining eligibility for promising clinical trials.
Dr. Rossi is assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill. She has no conflicts of interest to declare. Email her at [email protected].
References
1. Simpkins SB et al. Hum Mol Genet. 1999;8:661-6.
2. Kahn R et al. Cancer. 2019 Sep 15;125(18):3172-83.
3. SGO Clinical Practice Statement: Screening for Lynch Syndrome in Endometrial Cancer. https://www.sgo.org/clinical-practice/guidelines/screening-for-lynch-syndrome-in-endometrial-cancer/
4. Kandoth C et al. Nature. 2013;497(7447):67-73.
5. Leon-Castillo A et al. J Clin Oncol. 2020 Oct 10;38(29):3388-97.
6. Marabelle A et al. J Clin Oncol. 2020 Jan 1;38(1):1-10.
In U.S., lockdowns added 2 pounds per month
Americans gained nearly 2 pounds per month under COVID-19 shelter-in-place orders in 2020, according to a new study published March 22, 2021, in JAMA Network Open.
Those who kept the same lockdown habits could have gained 20 pounds during the past year, the study authors said.
“We know that weight gain is a public health problem in the U.S. already, so anything making it worse is definitely concerning, and shelter-in-place orders are so ubiquitous that the sheer number of people affected by this makes it extremely relevant,” Gregory Marcus, MD, the senior author and a cardiologist at the University of California, San Francisco, told the New York Times.
Dr. Marcus and colleagues analyzed more than 7,000 weight measurements from 269 people in 37 states who used Bluetooth-connected scales from Feb. 1 to June 1, 2020. Among the participants, about 52% were women, 77% were White, and they had an average age of 52 years.
The research team found that participants had a steady weight gain of more than half a pound every 10 days. That equals about 1.5-2 pounds per month.
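The conversion is simple arithmetic; here is a minimal sketch, assuming a rate of about 0.6 pounds per 10 days as reported.

```python
# Rate conversion from the reported weight-gain figure (illustrative only).
lb_per_10_days = 0.6  # "more than half a pound every 10 days" (assumed ~0.6 lb)
lb_per_month = lb_per_10_days * 30 / 10
lb_per_year = lb_per_10_days * 365 / 10
print(f"~{lb_per_month:.1f} lb/month, ~{lb_per_year:.0f} lb/year")  # ~1.8 and ~22
```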
Many of the participants were losing weight before the shelter-in-place orders went into effect, Dr. Marcus said. The lockdown effects could be even greater for those who weren’t losing weight before.
“It’s reasonable to assume these individuals are more engaged with their health in general, and more disciplined and on top of things,” he said. “That suggests we could be underestimating – that this is the tip of the iceberg.”
The small study doesn’t represent all of the nation and can’t be generalized to the U.S. population, the study authors noted, but it’s an indicator of what happened during the pandemic. The participants’ weight increased regardless of their location and chronic medical conditions.
Overall, people don’t move around as much during lockdowns, the UCSF researchers reported in another study published in Annals of Internal Medicine in November 2020. According to smartphone data, daily step counts decreased by 27% in March 2020. The step counts increased again throughout the summer but still remained lower than before the COVID-19 pandemic.
“The detrimental health outcomes suggested by these data demonstrate a need to identify concurrent strategies to mitigate weight gain,” the authors wrote in the JAMA Network Open study, “such as encouraging healthy diets and exploring ways to enhance physical activity, as local governments consider new constraints in response to SARS-CoV-2 and potential future pandemics.”
A version of this article first appeared on WebMD.com.
Drug-resistant TB trial stopped early after successful results
Médecins Sans Frontières (MSF/Doctors Without Borders) announced early closure of its phase 2/3 trial of a 6-month multidrug regimen for multidrug-resistant tuberculosis (MDR-TB) because an independent data and safety monitoring board (DSMB) determined that the drug combination in the study regimen was superior to current therapy, according to a press release.
The trial, called TB PRACTECAL, compared the current local standard of care with a 6-month regimen of bedaquiline, pretomanid, linezolid, and moxifloxacin. The interim analysis included 242 patients, and the randomized, controlled trial was conducted at sites in Belarus, South Africa, and Uzbekistan.
The preliminary data will be shared with the World Health Organization soon and will also be submitted to a peer-reviewed journal. If the data withstand further review, as is anticipated, the trial would support the first solely oral regimen for MDR-TB.
In 2019, an estimated 465,000 people developed MDR-TB and 182,000 died. The global burden of TB at that time was about 10 million new cases, many with coexisting HIV.
Current treatment for MDR-TB lasts 9-20 months and is complicated by the need for painful shots and toxic antibiotics. Side effects can include psychiatric problems from quinolones, isoniazid, ethambutol, or cycloserine; deafness from aminoglycosides; and bone marrow suppression from linezolid, among other toxicities.
It’s hoped that the shorter regimen will reduce toxicity and improve patient compliance. Poor adherence to treatment is a major driver of further drug resistance. Current regimens require up to 20 pills per day as well as daily injections.
In a prepared statement from MSF, David Moore, MD, MSc, of the London School of Hygiene and Tropical Medicine and a member of the TB PRACTECAL trial’s steering committee, concluded: “The findings could transform the way we treat patients with drug-resistant forms of TB worldwide, who have been neglected for too long.”
This good news is particularly welcome as, in the time of COVID-19, “an estimated 1.4 million fewer people received care for tuberculosis in 2020 than in 2019,” according to the WHO. The drop, an overall 21% reduction in patients beginning treatment, ranged as high as 42% in Indonesia.
Although awaiting complete data, Madhukar Pai, MD, PhD, associate director of the McGill International TB Centre, McGill University, Montreal, shares Dr. Moore’s enthusiasm. In an interview, Dr. Pai compared MDR-TB with extensively drug-resistant TB (XDR-TB).
“I’m excited about the possibility that these trial results might help shorten MDR-TB treatment to 6 months,” said Dr. Pai. “That will be a huge relief to all patients battling drug-resistant disease. The 6-month BPaL regimen (bedaquiline, pretomanid, and linezolid) regimen works well in XDR-TB. So, I would expect the TB PRACTECAL regimen with one added drug (moxifloxacin) to work well in MDR-TB, which is less severe than XDR-TB. Between these two regimens, if we can bring down MDR and XDR treatment to 6 months, all oral, that would be a huge advance.”
The expense of bedaquiline has been a long-standing concern in the global health community. Janssen, a subsidiary of Johnson & Johnson, has reduced the price to $340 per 6-month treatment course for more than 135 eligible low- and middle-income countries.
Previously, the tiered pricing structure was different for low-, middle-, and high-income countries (U.S. $900, $3,000, and $30,000, respectively). “The global TB community has asked Janssen to drop the price of bedaquiline to a level no higher than $32 per month – double the price at which researchers estimated bedaquiline could be sold for a profit,” according to the Treatment Action Group. A major source of contention over pricing has been the considerable public investment in the drug’s development.
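Putting the quoted prices on a common per-month basis makes the gap explicit; all dollar figures below are from the text above, and the arithmetic is ours.

```python
# Comparing the quoted bedaquiline prices per month of treatment.
course_months = 6
price_per_course = 340   # USD, reduced price for eligible countries
target_per_month = 32    # USD, the advocacy ceiling quoted above

current_per_month = price_per_course / course_months
print(f"Current: ~${current_per_month:.2f}/month vs. target ${target_per_month}/month")
print(f"Current price is {current_per_month / target_per_month:.1f}x the advocated ceiling")
```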
Dr. Pai concluded: “Bedaquiline is likely the most important drug in both 6-month regimens. We need to work harder to make bedaquiline, an excellent drug, more affordable and accessible.”
While the full data is not yet publicly available, TB PRACTECAL was a randomized, controlled, multicenter study. The fact that enrollment was discontinued early by the DSMB suggests the efficacy data was compelling and that this completely oral regimen will become the standard of care.
Dr. Stone is an infectious disease specialist and author of Resilience: One Family’s Story of Hope and Triumph Over Evil and of Conducting Clinical Research, the essential guide to the topic. A version of this article first appeared on Medscape.com.
Vitamin D may protect against COVID-19, especially in Black patients
Higher levels of vitamin D than those traditionally considered sufficient may help protect against COVID-19 infection, particularly in Black patients, according to a new single-center, retrospective study of the role of vitamin D in preventing infection.
The study, published recently in JAMA Network Open, noted that expert opinion varies as to what constitutes a “sufficient” vitamin D level: some define it as 30 ng/mL or greater, while others cite 40 ng/mL or greater.
In their discussion, the authors also noted that their results showed the “risk of positive COVID-19 test results decreased significantly with increased vitamin D level of 30 ng/mL or greater when measured as a continuous variable.”
“These new results tell us that having vitamin D levels above those normally considered sufficient is associated with decreased risk of testing positive for COVID-19, at least in Black individuals,” lead author, David Meltzer, MD, chief of hospital medicine at the University of Chicago, said in a press release from his institution.
“These findings suggest that randomized clinical trials to determine whether increasing vitamin D levels to greater than 30-40 ng/mL affect COVID-19 risk are warranted, especially in Black individuals,” he and his coauthors said.
Vitamin D at time of testing most strongly associated with COVID risk
An earlier study by the same researchers found that vitamin D deficiency (less than 20 ng/mL) may raise the risk of testing positive for COVID-19 in people from various ethnicities, as reported by this news organization.
Data for this latest study were drawn from electronic health records for 4,638 individuals at the University of Chicago Medicine and were used to examine whether the likelihood of a positive COVID-19 test was associated with a person’s most recent vitamin D level (within the previous year), and whether there was any effect of ethnicity on this outcome.
Mean age was 52.8 years, 69% were women, 49% were Black, 43% White, and 8% were another race/ethnicity. A total of 27% of the individuals were deficient in vitamin D (less than 20 ng/mL), 27% had insufficient levels (20-30 ng/mL), 22% had sufficient levels (30-40 ng/mL), and the remaining 24% had levels of 40 ng/mL or greater.
In total, 333 people (7%) tested positive for COVID-19, including 102 (5%) White and 211 (9%) Black individuals. Among those who tested positive, 36% of Black individuals were classified as vitamin D deficient, compared with 16% of White individuals.
A positive test result for COVID-19 was significantly associated with vitamin D levels in Black individuals but not in White individuals.
In Black people, compared with levels of at least 40 ng/mL, vitamin D levels of 30-40 ng/mL were associated with an incidence rate ratio (IRR) of 2.64 for COVID-19 positivity (P = .01). For levels of 20-30 ng/mL, the IRR was 1.69 (P = .21), and for less than 20 ng/mL, the IRR was 2.55 (P = .009).
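For readers less familiar with the statistic, the incidence rate ratio is standard epidemiology rather than anything specific to this paper: it divides the rate of positive tests in an exposure group by the rate in the reference group, here those with levels of at least 40 ng/mL.

\[
\text{IRR} = \frac{c_{\text{exposed}} / T_{\text{exposed}}}{c_{\text{reference}} / T_{\text{reference}}},
\]

where $c$ counts positive tests and $T$ is the person-time at risk in each group; an IRR of 2.64 thus corresponds to roughly 2.6 times the rate of positive tests among those with levels of 30-40 ng/mL.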
The researchers also found that the risk of positive test results with lower vitamin D levels increased when those levels were lower just prior to the positive COVID-19 test, lending “support [to] the idea that vitamin D level at the time of testing is most strongly associated with COVID-19 risk,” they wrote.
Try upping vitamin D levels to 40 ng/mL or greater to prevent COVID?
In their discussion, the authors noted that the significant association of vitamin D levels with COVID-19 risk in Black but not White individuals “could reflect their higher COVID-19 risk, to which socioeconomic factors and structural inequities clearly contribute.
“Biological susceptibility to vitamin D deficiency may also be less frequent in White than Black individuals, since lighter skin increases vitamin D production in response to sunlight, and vitamin D binding proteins may vary by race and affect vitamin D bioavailability.”
Given that less than 10% of U.S. adults have a vitamin D level greater than 40 ng/mL, the study findings increase the urgency to consider whether increased sun exposure or supplementation could reduce COVID-19 risk, according to the authors.
“When increased sun exposure is impractical, achieving vitamin D levels of 40 ng/mL or greater typically requires greater supplementation than currently recommended for most individuals of 600-800 IU/d vitamin D3,” they added.
However, Dr. Meltzer also acknowledged that “this is an observational study. We can see that there’s an association between vitamin D levels and likelihood of a COVID-19 diagnosis, but we don’t know exactly why that is, or whether these results are due to the vitamin D directly or other related biological factors.”
All in all, the authors suggested that randomized clinical trials are needed to determine whether vitamin D can reduce COVID-19 risk. Such trials, they said, should use supplement doses likely to raise vitamin D levels to at least 40 ng/mL, and perhaps even higher, provided those levels can be achieved safely.
“Studies should also consider the role of vitamin D testing, loading doses, dose adjustments for individuals who are obese or overweight, risks for hypercalcemia, and strategies to monitor for and mitigate hypercalcemia, and that non-White populations, such as Black individuals, may have greater needs for supplementation,” they outlined.
The researchers are now recruiting participants for two separate clinical trials testing the efficacy of vitamin D supplements for preventing COVID-19.
The authors disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Poor survival with COVID in patients who have had HSCT
Among individuals who have received a hematopoietic stem cell transplant (HSCT), often used in the treatment of blood cancers, rates of survival are poor for those who develop COVID-19.
The probability of survival 30 days after being diagnosed with COVID-19 is only 68% for persons who have received an allogeneic HSCT and 67% for autologous HSCT recipients, according to new data from the Center for International Blood and Marrow Transplant Research.
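These 30-day figures are modeled survival probabilities rather than crude proportions. Assuming the standard Kaplan-Meier estimator, the usual default for such analyses (the excerpt does not quote the paper’s methods), survival at time $t$ is

\[
\hat{S}(t) = \prod_{t_i \le t} \left( 1 - \frac{d_i}{n_i} \right),
\]

where $d_i$ is the number of deaths and $n_i$ the number of patients still at risk at each observed death time $t_i$. Because patients censored before day 30 leave the risk sets early, the implied 30-day mortality of $1 - 0.68 = 32\%$ for allogeneic recipients can exceed the crude death proportion reported further below (22%).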
These findings underscore the need for “stringent surveillance and aggressive treatment measures” in this population, Akshay Sharma, MBBS, of St. Jude Children’s Research Hospital, Memphis, and colleagues wrote.
The findings were published online March 1, 2021, in The Lancet Haematology.
The study is “of importance for physicians caring for HSCT recipients worldwide,” Mathieu Leclerc, MD, and Sébastien Maury, MD, Hôpital Henri Mondor, Créteil, France, commented in an accompanying editorial.
Study details
For their study, Dr. Sharma and colleagues analyzed outcomes for all HSCT recipients who developed COVID-19 and whose cases were reported to the CIBMTR. Of 318 such patients, 184 had undergone allogeneic HSCT, and 134 had undergone autologous HSCT.
Overall, about half of these patients (49%) had mild COVID-19.
Severe COVID-19 that required mechanical ventilation developed in 15% and 13% of the allogeneic and autologous HSCT recipients, respectively.
About one-fifth of patients died: 22% and 19% of allogeneic and autologous HSCT recipients, respectively.
Factors associated with greater mortality risk included age of 50 years or older (hazard ratio, 2.53), male sex (HR, 3.53), and development of COVID-19 within 12 months of undergoing HSCT (HR, 2.67).
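As context for interpreting these figures, hazard ratios of this kind conventionally come from a Cox proportional hazards model; the following is a standard sketch of that model, not a quotation of the paper’s methods. The hazard for a patient with covariate vector $x$ is

\[
h(t \mid x) = h_0(t)\, e^{\beta^\top x}, \qquad \text{HR}_j = e^{\beta_j},
\]

so the HR of 2.53 for age 50 years or older means an estimated 2.53-fold higher instantaneous risk of death at any given time, with the other covariates held equal.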
Among autologous HSCT recipients, lymphoma was associated with higher mortality risk in comparison with a plasma cell disorder or myeloma (HR, 2.41), the authors noted.
“Two important messages can be drawn from the results reported by Sharma and colleagues,” Dr. Leclerc and Dr. Maury wrote in their editorial. “The first is the confirmation that the prognosis of COVID-19 is particularly poor in HSCT recipients, and that its prevention, in the absence of any specific curative treatment with sufficient efficacy, should be at the forefront of concerns.”
The second relates to the risk factors for death among HSCT recipients who develop COVID-19. In addition to previously known risk factors, such as age and gender, the investigators identified transplant-specific factors potentially associated with prognosis – namely, the nearly threefold increase in death among allogeneic HSCT recipients who develop COVID-19 within 12 months of transplant, they explained.
However, the findings are limited by a substantial amount of missing data, short follow-up, and the possibility of selection bias, they noted.
“Further large and well-designed studies with longer follow-up are needed to confirm and refine the results,” the editorialists wrote.
“[A] better understanding of the distinctive features of COVID-19 infection in HSCT recipients will be a necessary and essential step toward improvement of the remarkably poor prognosis observed in this setting,” they added.
The study was funded by the American Society of Hematology; the Leukemia and Lymphoma Society; the National Cancer Institute; the National Heart, Lung and Blood Institute; the National Institute of Allergy and Infectious Diseases; the National Institutes of Health; the Health Resources and Services Administration; and the Office of Naval Research. Dr. Sharma receives support for the conduct of industry-sponsored trials from Vertex Pharmaceuticals, CRISPR Therapeutics, and Novartis and consulting fees from Spotlight Therapeutics. Dr. Leclerc and Dr. Maury disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.