A Paleolithic raw bar, and the human brush with extinction
This essay is adapted from the newly released book, “A History of the Human Brain: From the Sea Sponge to CRISPR, How Our Brain Evolved.”
“He was a bold man that first ate an oyster.” – Jonathan Swift
That man – or, just as likely, that woman – may have done so out of necessity. It was either eat this glistening, gray blob of briny goo or perish.
Beginning 190,000 years ago, a glacial age we identify today as Marine Isotope Stage 6, or MIS6, had set in, cooling and drying out much of the planet. There was widespread drought, leaving the African plains a harsher, more barren substrate for survival – an arena of competition, desperation, and starvation for many species, including ours. Some estimates have the sapiens population dipping to just a few hundred people during MIS6. Like other apes today, we were an endangered species. But through some nexus of intelligence, ecological exploitation, and luck, we managed. Anthropologists argue over what part of Africa would’ve been hospitable enough to rescue sapiens from Darwinian oblivion. Arizona State University archaeologist Curtis Marean, PhD, believes the continent’s southern shore is a good candidate.
For 2 decades, Dr. Marean has overseen excavations at a site called Pinnacle Point on the South African coast. The region has over 9,000 plant species, including the world’s most diverse population of geophytes, plants with underground energy-storage organs such as bulbs, tubers, and rhizomes. These subterranean stores are rich in calories and carbohydrates, and, by virtue of being buried, are protected from most other species (save the occasional tool-wielding chimpanzee). They are also adapted to cold climates and, when cooked, easily digested. All in all, a coup for hunter-gatherers.
The other enticement at Pinnacle Point could be found with a few easy steps toward the sea. Mollusks. Geological samples from MIS6 show South Africa’s shores were packed with mussels, oysters, clams, and a variety of sea snails. We almost certainly turned to them for nutrition.
Dr. Marean’s research suggests that, sometime around 160,000 years ago, at least one group of sapiens began supplementing their terrestrial diet by exploiting the region’s rich shellfish beds. This is the oldest evidence to date of humans consistently feasting on seafood – easy, predictable, immobile calories. No hunting required. As inland Africa dried up, learning to shuck mussels and oysters was a key adaptation to coastal living, one that supported our later migration out of the continent.
Dr. Marean believes the change in behavior was possible thanks to our already keen brains, which supported an ability to track tides, especially spring tides. Spring tides occur twice a month with each new and full moon and result in the greatest difference between high and low tidewaters. The people of Pinnacle Point learned to exploit this cycle. “By tracking tides, we would have had easy, reliable access to high-quality proteins and fats from shellfish every 2 weeks as the ocean receded,” he says. “Whereas you can’t rely on land animals to always be in the same place at the same time.” Work by Jan De Vynck, PhD, a professor at Nelson Mandela University in South Africa, supports this idea, showing that foraging shellfish beds under optimal tidal conditions can yield a staggering 3,500 calories per hour!
“I don’t know if we owe our existence to seafood, but it was certainly important for the population [that Dr.] Curtis studies. That place is full of mussels,” said Ian Tattersall, PhD, curator emeritus with the American Museum of Natural History in New York.
“And I like the idea that during a population bottleneck we got creative and learned how to focus on marine resources.” Innovations, Dr. Tattersall explained, typically occur in small, fixed populations. Large populations have too much genetic inertia to support radical innovation; the status quo is enough to survive. “If you’re looking for evolutionary innovation, you have to look at smaller groups.”
MIS6 wasn’t the only near-extinction in our past. During the Pleistocene epoch, roughly 2.5 million to 12,000 years ago, humans tended to maintain a small population, hovering around a million and later growing to maybe 8 million at most. Periodically, our numbers dipped as climate shifts, natural disasters, and food shortages brought us dangerously close to extinction. Modern humans are descended from the hardy survivors of these bottlenecks.
One especially dire stretch occurred around 1 million years ago. Our effective population (the number of breeding individuals) shriveled to around 18,000, smaller than that of other apes at the time. Worse, our genetic diversity – the insurance policy on evolutionary success and the ability to adapt – plummeted. A similar near extinction may have occurred around 75,000 years ago, the result of a massive volcanic eruption in Sumatra.
Our smarts and adaptability helped us endure these tough times – omnivorism helped us weather scarcity.
A sea of vitamins
Both Dr. Marean and Dr. Tattersall agree that the sapiens hanging on in southern Africa couldn’t have lived entirely on shellfish.
Most likely they also spent time hunting and foraging roots inland, making pilgrimages to the sea during spring tides. Dr. Marean believes coastal cuisine may have allowed a paltry human population to hang on until climate change led to more hospitable terrain. He’s not entirely sold on the idea that marine life was necessarily a driver of human brain evolution.
By the time we incorporated seafood into our diets we were already smart, our brains shaped through millennia of selection for intelligence. “Being a marine forager requires a certain degree of sophisticated smarts,” he said. It requires tracking the lunar cycle and planning excursions to the coast at the right times. Shellfish were simply another source of calories.
Unless you ask Michael Crawford.
Dr. Crawford is a professor at Imperial College London and a strident believer that our brains are those of sea creatures. Sort of.
In 1972, he copublished a paper concluding that the brain is structurally and functionally dependent on an omega-3 fatty acid called docosahexaenoic acid, or DHA. The human brain is composed of nearly 60% fat, so it’s not surprising that certain fats are important to brain health. Nearly 50 years after Dr. Crawford’s study, omega-3 supplements are now a multi-billion-dollar business.
Omega-3s, or more formally, omega-3 polyunsaturated fatty acids (PUFAs), are essential fats, meaning they aren’t produced by the body and must be obtained through diet. We get them from vegetable oils, nuts, seeds, and animals that eat such things. But take an informal poll, and you’ll find most people probably associate omega fatty acids with fish and other seafood.
In the 1970s and 1980s, scientists took notice of the low rates of heart disease in Eskimo communities. Research linked their cardiovascular health to a high-fish diet (fish cannot produce omega-3s themselves; they source them from algae), and eventually the medical and scientific communities began to rethink fat. Study after study found omega-3 fatty acids to be healthy. They were linked with a lower risk for heart disease and overall mortality. All those decades of parents forcing various fish oils on their grimacing children now had some science behind them. There is such a thing as a good fat.
Omega-3s are also good for the brain, especially DHA and eicosapentaenoic acid, or EPA. Omega fats provide structure to neuronal cell membranes and are crucial in neuron-to-neuron communication. They increase levels of a protein called brain-derived neurotrophic factor (BDNF), which supports neuronal growth and survival. A growing body of evidence shows omega-3 supplementation may slow neurodegeneration, the gradual deterioration of the brain that results in Alzheimer’s disease and other forms of dementia.
Popping a daily omega-3 supplement or, better still, eating a seafood-rich diet, may increase blood flow to the brain. In 2019, the International Society for Nutritional Psychiatry Research recommended omega-3s as an adjunct therapy for major depressive disorder. PUFAs appear to reduce the risk for and severity of mood disorders such as depression and to boost attention in children with ADHD as effectively as drug therapies.
Many researchers claim there would’ve been plenty of DHA available on land to support early humans, and marine foods were just one of many sources.
Not Dr. Crawford.
He believes that brain development and function are not only dependent on DHA but, in fact, DHA sourced from the sea was critical to mammalian brain evolution. “The animal brain evolved 600 million years ago in the ocean and was dependent on DHA, as well as compounds such as iodine, which is also in short supply on land,” he said. “To build a brain, you need these building blocks, which were rich at sea and on rocky shores.”
Dr. Crawford cites his early biochemical work showing DHA isn’t readily accessible from the muscle tissue of land animals. Using DHA tagged with a radioactive isotope, he and his colleagues in the 1970s found that “ready-made” DHA, like that found in shellfish, is incorporated into the developing rat brain with 10-fold greater efficiency than DHA converted from alpha-linolenic acid, the metabolic precursor found in plants and land animals. “I’m afraid the idea that ample DHA was available from the fats of animals on the savanna is just not true,” he contends. According to Dr. Crawford, our tiny, wormlike ancestors were able to evolve primitive nervous systems and flit through the silt thanks to the abundance of healthy fat to be had by living in the ocean and consuming algae.
For over 40 years, Dr. Crawford has argued that rising rates of mental illness are a result of post–World War II dietary changes, especially the move toward land-sourced food and the medical community’s subsequent support of low-fat diets. He feels that omega-3s from seafood were critical to humans’ rapid neural march toward higher cognition, and are therefore critical to brain health. “The continued rise in mental illness is an incredibly important threat to mankind and society, and moving away from marine foods is a major contributor,” said Dr. Crawford.
University of Sherbrooke (Que.) physiology professor Stephen Cunnane, PhD, tends to agree that aquatically sourced nutrients were crucial to human evolution. It’s the importance of coastal living he’s not sure about. He believes hominins would’ve incorporated fish from lakes and rivers into their diet for millions of years. In his view, it wasn’t just omega-3s that contributed to our big brains, but a cluster of nutrients found in fish: iodine, iron, zinc, copper, and selenium among them. “I think DHA was hugely important to our evolution and brain health but I don’t think it was a magic bullet all by itself,” he said. “Numerous other nutrients found in fish and shellfish were very probably important, too, and are now known to be good for the brain.”
Dr. Marean agrees. “Accessing the marine food chain could have had a huge impact on fertility, survival, and overall health, including brain health, in part, due to the high return on omega-3 fatty acids and other nutrients.” But, he speculates, before MIS6, hominins would have had access to plenty of brain-healthy terrestrial nutrition, including meat from animals that consumed omega-3–rich plants and grains.
Dr. Cunnane agrees with Dr. Marean to a degree. He’s confident that higher intelligence evolved gradually over millions of years as mutations inching the cognitive needle forward conferred survival and reproductive advantages – but he maintains that certain advantages like, say, being able to shuck an oyster, allowed an already intelligent brain to thrive.
Foraging marine life in the waters off Africa likely played an important role in keeping some of our ancestors alive and supported our subsequent propagation throughout the world. By this point, the human brain was already a marvel of consciousness and computing, not too dissimilar to the one we carry around today.
In all likelihood, Pleistocene humans got their nutrients and calories wherever they could. If we lived inland, we hunted. Maybe we speared the occasional catfish. We sourced nutrients from fruits, leaves, and nuts. A few times a month, those of us near the coast enjoyed a feast of mussels and oysters.
Dr. Stetka is an editorial director at Medscape.com, a former neuroscience researcher, and a nonpracticing physician. A version of this article first appeared on Medscape.
Biomarkers predict efficacy of DKN-01 in endometrial cancer
Among 29 patients with heavily pretreated epithelial endometrial cancer (EEC), outcomes of DKN-01 monotherapy were best in patients with Wnt activating mutations, high levels of DKK1 expression, or PIK3CA activating mutations.
Patients in these groups had better disease control rates and progression-free survival (PFS), reported Rebecca C. Arend, MD, of the University of Alabama at Birmingham.
“Future development will focus on biomarker-selected patients, specifically patients with Wnt activating mutations, high tumoral DKK1, and PIK3CA activating mutations,” Dr. Arend said at the Society of Gynecologic Oncology’s Virtual Annual Meeting on Women’s Cancer (Abstract 10717).
She explained that DKK1 has been shown to modulate signaling in the Wnt/beta-catenin pathway, a key regulator of cellular functions in humans and animals that has been highly conserved throughout evolution.
“DKK1 activates PI3 kinase/AKT signaling by binding to the CKAP4 receptor to promote tumor growth,” Dr. Arend explained.
Focus on monotherapy
Dr. Arend and colleagues conducted a phase 2 basket trial of DKN-01 either as monotherapy or in combination with paclitaxel in patients with EEC, epithelial ovarian cancer, and carcinosarcoma (malignant mixed Müllerian tumor). The trial design required at least 50% of patients to have Wnt signaling alterations.
Dr. Arend reported results for 29 patients with EEC who received DKN-01 monotherapy.
There were nine patients with Wnt activating mutations. None of them achieved a complete response (CR) or partial response (PR), but six had stable disease (SD), for a disease control rate of 67%. Of the 20 patients without Wnt activating mutations, 1 achieved a CR, 1 achieved a PR, and 3 had SD, for a disease control rate of 25%.
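A note on the arithmetic, assuming the conventional definition of disease control rate (DCR) as the proportion of patients achieving a complete response, partial response, or stable disease:
\[
\mathrm{DCR} = \frac{\mathrm{CR} + \mathrm{PR} + \mathrm{SD}}{n}, \qquad \frac{0 + 0 + 6}{9} \approx 67\%, \qquad \frac{1 + 1 + 3}{20} = 25\%
\]
The same calculation underlies the disease control rates reported in the subgroup analyses that follow.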
The median PFS was 5.5 months in patients with Wnt activating mutations and 1.8 months in patients without the mutations.
“Importantly, patients whose tumors have a Wnt activating mutation have a correlation with increased tumoral expression of DKK1 by 14.4-fold higher,” Dr. Arend noted.
When she and her colleagues analyzed patients by DKK1 expression, the team found that high levels of DKK1 correlated with better clinical outcomes. The disease control rate was 57% for patients in the highest third of DKK1 expression (1 PR, 3 SD) vs. 7% (1 SD) for those in the lowest two-thirds. The median PFS was 3 months and 1.8 months, respectively.
Of the seven patients whose tumors could not be evaluated for DKK1 expression, one patient had a CR and 5 had SD, for a disease control rate of 86%. The median PFS in this group was 8.0 months. Three of these patients had known Wnt activating mutations.
“Given this correlation [between] higher DKK1 expression [and] Wnt activating mutations, one could consider that, at a minimum, these patients would have had a higher DKK1 expression as well,” Dr. Arend said.
She and her colleagues also found that patients with PIK3CA activating mutations and two or fewer prior lines of therapy had a 33% overall response rate (1 CR, 1 PR), compared with 0% for patients without these mutations who had two or fewer prior therapies. Patients with PIK3CA activating mutations also had a better disease control rate (67% vs. 40%) and median PFS (5.6 months vs. 1.8 months).
Although Dr. Arend did not present safety data from the study at SGO 2021, she reported some data in a video investor call for Leap Therapeutics, which is developing DKN-01. She said the most common treatment-emergent adverse events with DKN-01 were nausea in 28.8% of patients, fatigue in 26.7%, and constipation in 11.5%. Serious events included acute kidney injury, dyspnea, nausea, and peripheral edema (occurring in 1.9% of patients each).
Monotherapy or combination?
In the question-and-answer session following Dr. Arend’s presentation, comoderator Joyce Liu, MD, of the Dana-Farber Cancer Institute in Boston, said that “even in the DKK1-high tumors, the activity of DKN-01 as a monotherapy is a little bit limited.”
She asked whether the future of targeting inhibitors in the Wnt/beta-catenin pathway will be limited to biomarker-specific populations or if agents such as DKN-01 should be used in combinations.
“I do think that we need a lot more data to determine,” Dr. Arend replied. “I think that there may be a subset of patients, especially those that don’t tolerate the [lenvatinib/pembrolizumab] combo who may have an upregulation of beta-catenin or a Wnt mutation who could benefit from monotherapy.”
Dr. Arend added that data from her lab and others suggest that DKN-01 in combination with other agents holds promise for improving outcomes in biomarker-selected populations.
The current study is funded by Leap Therapeutics. Dr. Arend disclosed advisory board activity for the company and others. Dr. Liu reported personal fees from several companies, not including Leap Therapeutics.
FROM SGO 2021
Will psoriasis patients embrace proactive topical therapy?
Long-term proactive topical management of plaque psoriasis with twice-weekly calcipotriene/betamethasone dipropionate foam has been shown in a high-quality randomized trial to be more effective than conventional reactive management – but will patients go for it?
Bruce E. Strober, MD, PhD, has his doubts, and he shared them with Linda Stein Gold, MD, after she presented updated results from the 52-week PSO-LONG trial at Innovations in Dermatology: Virtual Spring Conference 2021.
Proactive therapy asks patients to keep applying a topical twice weekly even after their skin has cleared. And while they did so in this study with an assist in the form of monthly office visits and nudging from investigators, in real-world clinical practice that’s unlikely to happen, according to Dr. Strober, of Yale University, New Haven, Conn.
“It makes sense to do what’s being done in this study, there’s no doubt, but I’m concerned about adherence and whether patients are really going to do it,” he said.
“Adherence is going to be everything here, and you know patients don’t like to apply topicals to their body. Once they’re clear they’re just going to walk away from the topical,” Dr. Strober predicted.
Dr. Stein Gold countered: “When a study goes on for a full year, it starts to reflect real life.”
Moreover, the PSO-LONG trial provides the first high-quality evidence physicians can share with patients demonstrating that proactive management pays off in terms of fewer relapses and more time in remission over the long haul, added Dr. Stein Gold, director of dermatology clinical research at the Henry Ford Health System in Detroit.
PSO-LONG was a double-blind, international, phase 3 study including 545 adults with plaque psoriasis who had clear or almost-clear skin after 4 weeks of once-daily calcipotriene 0.005%/betamethasone dipropionate 0.064% (Cal/BD) foam (Enstilar), and were then randomized to twice-weekly proactive management or to a reactive approach involving application of vehicle on the same twice-weekly schedule. Relapses resulted in rescue therapy with 4 weeks of once-daily Cal/BD foam.
The primary endpoint was the median time to first relapse: 56 days with the proactive approach, a significant improvement over the 30 days with the reactive approach. Over the course of 52 weeks, the proactive group spent an additional 41 days in remission, compared with the reactive group. Patients randomized to twice-weekly Cal/BD foam averaged 3.1 relapses per year, compared with 4.8 with reactive management. The side-effect profiles in the two study arms were similar.
Mean Physician Global Assessment scores and Psoriasis Area and Severity Index (PASI) scores for the proactive group clearly separated from the reactive group by week 4, with those differences maintained throughout the year. The area under the curve for the Physician Global Assessment score was 15% lower in the proactive group, and 20% lower for the modified PASI score.
“These results suggest that proactive management – a concept that’s been used for atopic dermatitis – could be applied to patients with psoriasis to prolong remission,” Dr. Stein Gold concluded at the conference, sponsored by MedscapeLIVE! and the producers of the Hawaii Dermatology Seminar and Caribbean Dermatology Symposium.
Asked how confident she is that patients in the real world truly will do this, Dr. Stein Gold replied: “You know, I don’t know. We hope so. Now we can tell them we actually have some data that supports treating the cleared areas. And it’s only twice a week, separated on Mondays and Thursdays.”
“I take a much more reactive approach,” Dr. Strober said. “I advise patients to get back in there with their topical steroid as soon as they see any signs of recurrence.”
He added that he’s eager to see if a proactive management approach such as the one that was successful in PSO-LONG is also beneficial using some of the promising topical agents with nonsteroidal mechanisms of action, which are advancing through the developmental pipeline.
Late in 2020, the Food and Drug Administration approved an expanded indication for Cal/BD foam, which includes the PSO-LONG data on the efficacy and safety of long-term twice-weekly therapy in adults in product labeling. The combination spray/foam was previously approved by the FDA as once-daily therapy in psoriasis patients aged 12 years and older, but only for up to 4 weeks because of safety concerns regarding longer use of the potent topical steroid as daily therapy.
The PSO-LONG trial was funded by LEO Pharma. Dr. Stein Gold reported serving as a paid investigator and/or consultant to LEO and numerous other pharmaceutical companies. Dr. Strober reported serving as a consultant to more than two dozen pharmaceutical companies. MedscapeLIVE! and this news organization are owned by the same parent company.
FROM INNOVATIONS IN DERMATOLOGY
The significance of mismatch repair deficiency in endometrial cancer
Women with Lynch syndrome are known to carry an approximately 60% lifetime risk of endometrial cancer. These cancers result from inherited deleterious mutations in genes that code for mismatch repair proteins. However, mismatch repair deficiency (MMR-d) is not exclusively found in the tumors of patients with Lynch syndrome, and much is being learned about this group of endometrial cancers, their behavior, and their vulnerability to targeted therapies.
During DNA replication, recombination, or chemical and physical damage, mismatches in base pairs frequently occur. Mismatch repair proteins identify and repair these errors; when their function is lost, insertions and deletions accumulate in short, repetitive sequences of DNA. This phenomenon can be measured with polymerase chain reaction (PCR) screening of known microsatellites to look for the accumulation of errors, a phenotype called microsatellite instability (MSI). The accumulation of errors in DNA sequences is thought to lead to mutations in cancer-related genes.
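To make the MSI readout concrete, here is a minimal, purely illustrative Python sketch: it flags a microsatellite locus as unstable when the tumor sample shows repeat lengths never seen in the matched normal sample. The locus names, read counts, and thresholds are invented for illustration; real assays rely on PCR fragment sizing across validated panels and more rigorous statistics.

```python
# Illustrative only: hypothetical repeat-length reads (number of repeat units
# observed) at three invented microsatellite loci, tumor vs. matched normal.
normal_reads = {
    "LOCUS_A": [25, 25, 26, 25, 25],
    "LOCUS_B": [17, 17, 17, 18, 17],
    "LOCUS_C": [21, 21, 21, 21, 22],
}
tumor_reads = {
    "LOCUS_A": [25, 22, 21, 25, 23],  # novel shorter alleles -> unstable
    "LOCUS_B": [17, 17, 18, 17, 17],  # same alleles as normal -> stable
    "LOCUS_C": [21, 18, 19, 21, 17],  # novel shorter alleles -> unstable
}

def locus_is_unstable(normal, tumor, min_novel_fraction=0.3):
    """Call a locus unstable if a sizable fraction of tumor reads show
    repeat lengths never observed in the matched normal sample."""
    normal_alleles = set(normal)
    novel = [length for length in tumor if length not in normal_alleles]
    return len(novel) / len(tumor) >= min_novel_fraction

unstable = [locus for locus in normal_reads
            if locus_is_unstable(normal_reads[locus], tumor_reads[locus])]

# One common convention: call the tumor "MSI-high" when a large share of
# tested loci are unstable (the 30% cutoff here is illustrative).
phenotype = "MSI-high" if len(unstable) / len(normal_reads) >= 0.3 else "MSS/MSI-low"
print(unstable, phenotype)  # ['LOCUS_A', 'LOCUS_C'] MSI-high
```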
The four predominant mismatch repair genes are MLH1, MSH2, MSH6, and PMS2. Loss of function in these genes may be inherited through the germline, as in Lynch syndrome, or sporadically acquired. Approximately 20%-30% of endometrial cancers exhibit MMR-d; acquired, sporadic loss of function accounts for the majority of these cases, and only about 10% result from Lynch syndrome. Mutations in PMS2 are the dominant genotype of Lynch syndrome, whereas loss of function in MLH1 is the most frequent aberration in sporadic cases of MMR-d endometrial cancer.1
Endometrial cancers can be tested for MMR-d by immunohistochemistry, looking for loss of expression of the four most common MMR proteins. If MLH1 expression is lost, additional triage testing can determine whether the loss is caused by the epigenetic phenomenon of promoter hypermethylation; when hypermethylation is present, Lynch syndrome is effectively excluded and a sporadic origin of the disease is suggested. If expression of the MMR genes is lost (including loss of MLH1 with subsequent negative testing for promoter methylation), the patient should receive genetic testing for a germline mutation indicating Lynch syndrome. As an adjunct or alternative to immunohistochemistry, PCR studies or next-generation sequencing can be used to detect microsatellite instability by identifying the expansion or reduction of repetitive DNA sequences in the tumor, compared with normal tissue.2
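The reflex-testing pathway just described is, at heart, a small decision tree. The Python sketch below encodes a simplified version of it; the function name, inputs, and output strings are our own illustrative simplifications of the workflow described in this article, not a validated clinical algorithm.

```python
def mmr_triage(ihc_loss, mlh1_promoter_hypermethylated=None):
    """Simplified triage of an endometrial cancer MMR screening result.

    ihc_loss: set of MMR proteins with lost expression on immunohistochemistry,
        e.g., {"MLH1", "PMS2"}; an empty set means expression is intact.
    mlh1_promoter_hypermethylated: result of reflex methylation testing,
        meaningful only when MLH1 expression is lost.
    """
    if not ihc_loss:
        return "MMR-proficient: IHC does not trigger further Lynch workup"

    if "MLH1" in ihc_loss:
        if mlh1_promoter_hypermethylated is None:
            return "MLH1 loss: reflex to MLH1 promoter methylation testing"
        if mlh1_promoter_hypermethylated:
            return "Likely sporadic MMR-d (epigenetic MLH1 silencing)"
        # MLH1 lost but promoter unmethylated: Lynch syndrome not excluded.
        return "Refer for germline testing (possible Lynch syndrome)"

    # Loss of MSH2, MSH6, or PMS2 without MLH1 involvement.
    return "Refer for germline testing (possible Lynch syndrome)"

# Example: MLH1/PMS2 loss with promoter hypermethylation suggests sporadic disease.
print(mmr_triage({"MLH1", "PMS2"}, mlh1_promoter_hypermethylated=True))
```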
It is of the highest importance to identify endometrial cancers caused by Lynch syndrome because this enables providers to offer cascade testing of relatives and to intensify screening or preventive measures for the many other cancers (such as colon, upper gastrointestinal, breast, and urothelial) for which these patients are at risk. Therefore, routine screening for MMR-d tumors is recommended in all cases of endometrial cancer, not only in patients diagnosed at a young age or with a strong family history.3 Using family history, primary tumor site, and age as triggers for Lynch syndrome screening, as in the Bethesda Guidelines, is associated with an 82% sensitivity for identifying Lynch syndrome. In a meta-analysis including testing results from 1,159 women with endometrial cancer, 43% of patients diagnosed with Lynch syndrome via molecular analysis would have been missed by clinical screening using the Bethesda Guidelines.2
Discovering cases of Lynch syndrome is not the only benefit of routine testing for MMR-d in endometrial cancers. There is also significant value in characterizing sporadic mismatch repair–deficient tumors, because this information provides prognostic information and guides therapy. Tumors with an MSI-high phenotype/MMR-d were identified as one of the four distinct molecular subgroups of endometrial cancer by the Cancer Genome Atlas.4 Patients with this molecular profile had “intermediate” prognostic outcomes, performing better than those with “serous-like” cancers harboring p53 mutations, yet worse than those in the POLE ultramutated group, who rarely experience recurrence or death, even in the setting of unfavorable histology.
Beyond prognostication, the molecular profile of endometrial cancers also influences their responsiveness to therapeutics, highlighting the importance of splitting, not lumping, endometrial cancers into relevant molecular subgroups when designing research and practicing clinical medicine. The PORTEC-3 trial studied 410 women with high-risk endometrial cancer, randomizing participants to receive either adjuvant radiation alone or radiation with chemotherapy.5 There were no differences in progression-free survival between the two therapeutic strategies when analyzed in aggregate. However, when analyzed by Cancer Genome Atlas molecular subgroup, there was a clear benefit from chemotherapy for patients with p53 mutations. For patients with MMR-d tumors, no such benefit was observed: they did no better with the addition of platinum and taxane chemotherapy than with radiation alone. Unfortunately, recurrence rates in MMR-d tumors remained high, suggesting that we can and must discover therapies more effective than conventional radiation or platinum and taxane chemotherapy.

Targeted therapy may be the solution to this problem. Through microsatellite instability, MMR-d tumors accumulate somatic mutations that generate neoantigens, creating an immunogenic environment. This state up-regulates checkpoint proteins, which serve as an actionable target for anti-PD-1 antibodies such as pembrolizumab, which has been shown to be highly active against MMR-d endometrial cancer. In the landmark KEYNOTE-158 trial, patients with advanced, recurrent solid tumors exhibiting MMR-d were treated with pembrolizumab.6 The cohort included 49 patients with endometrial cancer, among whom there was a 79% response rate. Subsequently, pembrolizumab was granted Food and Drug Administration approval for use in advanced, recurrent MMR-d/MSI-high endometrial cancer. Trials are currently enrolling patients to explore the utility of this drug in the up-front setting in both early- and late-stage disease, with the hope that this targeted therapy can do what conventional cytotoxic chemotherapy has failed to do.
Therefore, given the clinical significance of mismatch repair deficiency, all patients with endometrial cancer should have their tumors investigated for loss of expression of these proteins and, if loss is present, be considered for the possibility of Lynch syndrome. While most will not have an inherited cause, this information about their tumor biology remains critically important for prognostication and for decision-making about other therapies and eligibility for promising clinical trials.
Dr. Rossi is assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill. She has no conflicts of interest to declare. Email her at [email protected].
References
1. Simpkins SB et al. Hum Mol Genet. 1999;8:661-6.
2. Kahn R et al. Cancer. 2019 Sep 15;125(18):3172-83.
3. SGO Clinical Practice Statement: Screening for Lynch Syndrome in Endometrial Cancer. https://www.sgo.org/clinical-practice/guidelines/screening-for-lynch-syndrome-in-endometrial-cancer/
4. Kandoth C et al. Nature. 2013;497(7447):67-73.
5. Leon-Castillo A et al. J Clin Oncol. 2020 Oct 10;38(29):3388-97.
6. Marabelle A et al. J Clin Oncol. 2020 Jan 1;38(1):1-10.
In U.S., lockdowns added 2 pounds per month
Americans gained nearly 2 pounds per month under COVID-19 shelter-in-place orders in 2020, according to a new study published March 22, 2021, in JAMA Network Open.
Those who kept the same lockdown habits could have gained 20 pounds during the past year, the study authors said.
“We know that weight gain is a public health problem in the U.S. already, so anything making it worse is definitely concerning, and shelter-in-place orders are so ubiquitous that the sheer number of people affected by this makes it extremely relevant,” Gregory Marcus, MD, the senior author and a cardiologist at the University of California, San Francisco, told the New York Times.
Dr. Marcus and colleagues analyzed more than 7,000 weight measurements from 269 people in 37 states who used Bluetooth-connected scales from Feb. 1 to June 1, 2020. Among the participants, about 52% were women, 77% were White, and they had an average age of 52 years.
The research team found that participants had a steady weight gain of more than half a pound every 10 days. That equals about 1.5-2 pounds per month.
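As a back-of-the-envelope check on that conversion (a quick illustration, not a calculation from the study itself), the reported rate extrapolates as follows:

```python
# Extrapolating the reported rate of gain; 0.59 lb stands in for
# "more than half a pound every 10 days" and is an assumed value.
gain_per_10_days_lb = 0.59
days_per_month = 30.4  # average month length

monthly_gain_lb = gain_per_10_days_lb / 10 * days_per_month
annual_gain_lb = monthly_gain_lb * 12

print(f"{monthly_gain_lb:.1f} lb/month, {annual_gain_lb:.0f} lb/year")
# -> 1.8 lb/month, 22 lb/year: consistent with "nearly 2 pounds per month"
#    and roughly 20 pounds over a year of unchanged lockdown habits.
```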
Many of the participants were losing weight before the shelter-in-place orders went into effect, Dr. Marcus said. The lockdown effects could be even greater for those who weren’t losing weight before.
“It’s reasonable to assume these individuals are more engaged with their health in general, and more disciplined and on top of things,” he said. “That suggests we could be underestimating – that this is the tip of the iceberg.”
The small study doesn’t represent all of the nation and can’t be generalized to the U.S. population, the study authors noted, but it’s an indicator of what happened during the pandemic. The participants’ weight increased regardless of their location and chronic medical conditions.
Overall, people don’t move around as much during lockdowns, the UCSF researchers reported in another study published in Annals of Internal Medicine in November 2020. According to smartphone data, daily step counts decreased by 27% in March 2020. The step counts increased again throughout the summer but still remained lower than before the COVID-19 pandemic.
“The detrimental health outcomes suggested by these data demonstrate a need to identify concurrent strategies to mitigate weight gain,” the authors wrote in the JAMA Network Open study, “such as encouraging healthy diets and exploring ways to enhance physical activity, as local governments consider new constraints in response to SARS-CoV-2 and potential future pandemics.”
A version of this article first appeared on WebMD.com.
Drug-resistant TB trial stopped early after successful results
Médecins Sans Frontières (MSF/Doctors Without Borders) announced early closure of its phase 2/3 trial of a 6-month multidrug regimen for multidrug-resistant tuberculosis (MDR-TB) because an independent data safety and monitoring board (DSMB) determined that the drug combination in the study regimen was superior to current therapy, according to a press release.
The trial, called TB PRACTECAL, compared the current local standard of care with a 6-month regimen of bedaquiline, pretomanid, linezolid, and moxifloxacin. The interim analysis included 242 patients; the randomized, controlled trial was conducted at sites in Belarus, South Africa, and Uzbekistan.
The preliminary data will be shared with the World Health Organization soon and will also be submitted to a peer-reviewed journal. If the findings withstand further review, as anticipated, the trial would support the first all-oral regimen for MDR-TB.
In 2019, an estimated 465,000 people developed MDR-TB and 182,000 died. The global burden of TB at that time was about 10 million new cases, many with coexisting HIV.
Current treatment for MDR-TB lasts 9-20 months and is complicated by the need for painful shots and toxic antibiotics. Side effects can include psychiatric problems from quinolones, isoniazid, ethambutol, or cycloserine; deafness from aminoglycosides; and bone marrow suppression from linezolid, among other toxicities.
It’s hoped that the shorter regimen will reduce toxicity and improve patient compliance. Poor adherence to treatment is a major driver of further drug resistance. Current regimens require up to 20 pills per day as well as daily injections.
In a prepared statement from MSF, David Moore, MD, MSc, London School of Hygiene and Tropical Medicine, a member of the TB PRACTECAL trial’s steering committee, concluded: “The findings could transform the way we treat patients with drug-resistant forms of TB worldwide, who have been neglected for too long.”
This good news is particularly welcome as, in the time of COVID-19, “an estimated 1.4 million fewer people received care for tuberculosis in 2020 than in 2019,” according to the WHO. The drop, an overall 21% reduction in patients beginning treatment, ranged as high as 42% in Indonesia.
Although awaiting complete data, Madhukar Pai, MD, PhD, associate director of the McGill International TB Centre, McGill University, Montreal, shares Dr. Moore’s enthusiasm. In an interview, Dr. Pai compared MDR-TB with extensively drug-resistant TB (XDR-TB).
“I’m excited about the possibility that these trial results might help shorten MDR-TB treatment to 6 months,” said Dr. Pai. “That will be a huge relief to all patients battling drug-resistant disease. The 6-month BPaL (bedaquiline, pretomanid, and linezolid) regimen works well in XDR-TB. So, I would expect the TB PRACTECAL regimen with one added drug (moxifloxacin) to work well in MDR-TB, which is less severe than XDR-TB. Between these two regimens, if we can bring down MDR and XDR treatment to 6 months, all oral, that would be a huge advance.”
The expense of bedaquiline has been a long-standing concern in the global health community. Janssen, a subsidiary of Johnson & Johnson, has reduced the price to $340 per 6-month treatment course for more than 135 eligible low- and middle-income countries.
Previously, the tiered pricing structure differed for low-, middle-, and high-income countries (U.S. $900, $3,000, and $30,000, respectively). “The global TB community has asked Janssen to drop the price of bedaquiline to a level no higher than $32 per month – double the price at which researchers estimated bedaquiline could be sold for a profit,” according to the Treatment Action Group. A major source of contention over pricing has been the considerable public investment in the drug’s development.
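Putting the quoted prices side by side (a simple illustrative comparison built only from the figures cited above):

```python
# Bedaquiline pricing figures quoted above, in U.S. dollars.
current_price_per_course = 340   # per 6-month treatment course
requested_monthly_cap = 32       # ceiling requested by the TB community
profitable_monthly_estimate = requested_monthly_cap / 2  # "double the price..."

requested_price_per_course = requested_monthly_cap * 6
print(f"Current: ${current_price_per_course}/course; "
      f"requested: ${requested_price_per_course}/course "
      f"(${requested_monthly_cap}/mo vs. an estimated "
      f"${profitable_monthly_estimate:.0f}/mo profitable price)")
```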
Dr. Pai concluded: “Bedaquiline is likely the most important drug in both 6-month regimens. We need to work harder to make bedaquiline, an excellent drug, more affordable and accessible.”
While the full data is not yet publicly available, TB PRACTECAL was a randomized, controlled, multicenter study. The fact that enrollment was discontinued early by the DSMB suggests the efficacy data was compelling and that this completely oral regimen will become the standard of care.
Dr. Stone is an infectious disease specialist and author of Resilience: One Family’s Story of Hope and Triumph Over Evil and of Conducting Clinical Research, the essential guide to the topic. A version of this article first appeared on Medscape.com.
Vitamin D may protect against COVID-19, especially in Black patients
Higher levels of vitamin D than traditionally considered sufficient may help prevent COVID-19 infection – particularly in Black patients, shows a new single-center, retrospective study looking at the role of vitamin D in prevention of infection.
The study, published recently in JAMA Network Open, noted that expert opinion varies as to what constitutes “sufficient” vitamin D levels: some define this as 30 ng/mL, while others cite 40 ng/mL or greater.
In their discussion, the authors also noted that their results showed the “risk of positive COVID-19 test results decreased significantly with increased vitamin D level of 30 ng/mL or greater when measured as a continuous variable.”
“These new results tell us that having vitamin D levels above those normally considered sufficient is associated with decreased risk of testing positive for COVID-19, at least in Black individuals,” lead author, David Meltzer, MD, chief of hospital medicine at the University of Chicago, said in a press release from his institution.
“These findings suggest that randomized clinical trials to determine whether increasing vitamin D levels to greater than 30-40 ng/mL affect COVID-19 risk are warranted, especially in Black individuals,” he and his coauthors said.
Vitamin D at time of testing most strongly associated with COVID risk
An earlier study by the same researchers found that vitamin D deficiency (less than 20 ng/mL) may raise the risk of testing positive for COVID-19 in people from various ethnicities, as reported by this news organization.
Data for this latest study were drawn from electronic health records for 4,638 individuals at the University of Chicago Medicine and were used to examine whether the likelihood of a positive COVID-19 test was associated with a person’s most recent vitamin D level (within the previous year), and whether there was any effect of ethnicity on this outcome.
Mean age was 52.8 years, 69% were women, 49% were Black, 43% White, and 8% were another race/ethnicity. A total of 27% of the individuals were deficient in vitamin D (less than 20 ng/mL), 27% had insufficient levels (20-30 ng/mL), 22% had sufficient levels (30-40 ng/mL), and the remaining 24% had levels of 40 ng/mL or greater.
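Those exposure groups are simply four cutoff bands on the most recent vitamin D level; the sketch below restates them in code for clarity (cutoffs as reported above; the function itself is ours, for illustration):

```python
def vitamin_d_category(level_ng_ml):
    """Bucket a serum vitamin D level using this study's cutoffs."""
    if level_ng_ml < 20:
        return "deficient (<20 ng/mL)"        # 27% of the cohort
    if level_ng_ml < 30:
        return "insufficient (20-30 ng/mL)"   # 27%
    if level_ng_ml < 40:
        return "sufficient (30-40 ng/mL)"     # 22%
    return "40 ng/mL or greater"              # 24%

print(vitamin_d_category(34))  # -> sufficient (30-40 ng/mL)
```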
In total, 333 people (7%) tested positive for COVID-19, including 102 White (5%) and 211 Black (9%) individuals. Among those who tested positive, 36% of Black individuals were classified as vitamin D deficient, compared with 16% of White individuals.
A positive test result for COVID-19 was not significantly associated with vitamin D levels in White individuals but was in Black individuals.
In Black people, compared with levels of at least 40 ng/mL, vitamin D levels of 30-40 ng/mL were associated with an incidence rate ratio (IRR) of 2.64 for COVID-19 positivity (P = .01). For levels of 20-30 ng/mL, the IRR was 1.69 (P = .21), and for levels of less than 20 ng/mL, the IRR was 2.55 (P = .009).
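For readers less familiar with the metric, an incidence rate ratio simply compares event rates between two groups. A generic definition (the study's exact regression model is not reproduced here) is:

```latex
\[
\mathrm{IRR} \;=\; \frac{c_1 / T_1}{c_0 / T_0},
\]
```

where \(c_i\) is the number of positive tests and \(T_i\) the person-time at risk in each stratum. The reference group here is individuals with levels of at least 40 ng/mL, so an IRR of 2.64 indicates a roughly 2.6-fold higher rate of positivity in the 30-40 ng/mL stratum.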
The researchers also found that the risk of a positive test result increased when vitamin D levels were lower just before the COVID-19 test, lending "support [to] the idea that vitamin D level at the time of testing is most strongly associated with COVID-19 risk," they wrote.
Try upping vitamin D levels to 40 ng/mL or greater to prevent COVID?
In their discussion, the authors noted that the significant association of vitamin D levels with COVID-19 risk in Black but not White individuals "could reflect their higher COVID-19 risk, to which socioeconomic factors and structural inequities clearly contribute.
“Biological susceptibility to vitamin D deficiency may also be less frequent in White than Black individuals, since lighter skin increases vitamin D production in response to sunlight, and vitamin D binding proteins may vary by race and affect vitamin D bioavailability.”
Given that less than 10% of U.S. adults have a vitamin D level greater than 40 ng/mL, the study findings increase the urgency of considering whether increased sun exposure or supplementation could reduce COVID-19 risk, according to the authors.
“When increased sun exposure is impractical, achieving vitamin D levels of 40 ng/mL or greater typically requires greater supplementation than currently recommended for most individuals of 600-800 IU/d vitamin D3,” they added.
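To illustrate why the cited 600-800 IU/d may fall short of a 40 ng/mL target, here is a minimal back-of-the-envelope sketch. The dose-response slope is an assumption drawn from commonly cited averages (roughly 0.6-1.0 ng/mL gained per 100 IU/day in adults); it is not a figure from this study, and none of this is dosing guidance.

```python
# Hypothetical estimate of the extra daily vitamin D3 needed to reach a
# target serum 25(OH)D level. The slope (ng/mL gained per 100 IU/day) is
# an assumed population average, NOT a value from the study.

def estimated_daily_dose_iu(current_ng_ml: float,
                            target_ng_ml: float = 40.0,
                            slope_per_100iu: float = 0.8) -> float:
    """Estimate additional IU/day of D3 to move serum 25(OH)D to the target."""
    gap = max(target_ng_ml - current_ng_ml, 0.0)
    return 100.0 * gap / slope_per_100iu

# Example: someone at 22 ng/mL aiming for 40 ng/mL
print(round(estimated_daily_dose_iu(22.0)))  # ~2250 IU/day, well above 600-800 IU/d
```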
However, Dr. Meltzer also acknowledged that “this is an observational study. We can see that there’s an association between vitamin D levels and likelihood of a COVID-19 diagnosis, but we don’t know exactly why that is, or whether these results are due to the vitamin D directly or other related biological factors.”
All in all, the authors suggested that randomized clinical trials are needed to determine whether vitamin D can reduce COVID-19 risk. Such trials, they said, should include doses of supplements likely to increase vitamin D levels to at least 40 ng/mL, and perhaps even higher, although they cautioned that higher levels must be achieved safely.
“Studies should also consider the role of vitamin D testing, loading doses, dose adjustments for individuals who are obese or overweight, risks for hypercalcemia, and strategies to monitor for and mitigate hypercalcemia, and that non-White populations, such as Black individuals, may have greater needs for supplementation,” they outlined.
They are now recruiting participants for two separate clinical trials testing the efficacy of vitamin D supplements for preventing COVID-19.
The authors disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Poor survival with COVID in patients who have had HSCT
Among individuals who have received a hematopoietic stem cell transplant (HSCT), often used in the treatment of blood cancers, rates of survival are poor for those who develop COVID-19.
The probability of survival 30 days after being diagnosed with COVID-19 is only 68% for persons who have received an allogeneic HSCT and 67% for autologous HSCT recipients, according to new data from the Center for International Blood and Marrow Transplant Research.
These findings underscore the need for “stringent surveillance and aggressive treatment measures” in this population, Akshay Sharma, MBBS, of St. Jude Children’s Research Hospital, Memphis, and colleagues wrote.
The findings were published online March 1, 2021, in The Lancet Haematology.
The study is “of importance for physicians caring for HSCT recipients worldwide,” Mathieu Leclerc, MD, and Sébastien Maury, MD, Hôpital Henri Mondor, Créteil, France, commented in an accompanying editorial.
Study details
For their study, Dr. Sharma and colleagues analyzed outcomes for all HSCT recipients who developed COVID-19 and whose cases were reported to the CIBMTR. Of 318 such patients, 184 had undergone allogeneic HSCT, and 134 had undergone autologous HSCT.
Overall, about half of these patients (49%) had mild COVID-19.
Severe COVID-19 that required mechanical ventilation developed in 15% and 13% of the allogeneic and autologous HSCT recipients, respectively.
About one-fifth of patients died: 22% and 19% of allogeneic and autologous HSCT recipients, respectively.
Factors associated with greater mortality risk included age of 50 years or older (hazard ratio, 2.53), male sex (HR, 3.53), and development of COVID-19 within 12 months of undergoing HSCT (HR, 2.67).
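Hazard ratios like these are typically estimated with a Cox proportional-hazards model. Below is a minimal illustrative sketch (not the CIBMTR analysis) using the open-source Python lifelines package; the DataFrame columns and values are hypothetical stand-ins for the registry variables.

```python
# Minimal sketch of estimating mortality hazard ratios with a Cox
# proportional-hazards model using lifelines (pip install lifelines).
# Column names and values are hypothetical; the CIBMTR data are not public.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.DataFrame({
    "days_to_death_or_censor": [30, 45, 12, 60, 25, 90, 18, 75],
    "died":                    [1, 0, 1, 0, 1, 1, 0, 0],
    "age_ge_50":               [1, 0, 1, 1, 0, 1, 0, 1],  # aged 50+ at COVID-19 diagnosis
    "male":                    [1, 1, 0, 1, 0, 0, 1, 0],
    "covid_within_12mo_hsct":  [1, 0, 0, 0, 1, 0, 1, 1],  # COVID-19 <12 months post transplant
})

cph = CoxPHFitter()
cph.fit(df, duration_col="days_to_death_or_censor", event_col="died")
cph.print_summary()  # the exp(coef) column gives the hazard ratios
```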
Among autologous HSCT recipients, lymphoma was associated with higher mortality risk in comparison with a plasma cell disorder or myeloma (HR, 2.41), the authors noted.
“Two important messages can be drawn from the results reported by Sharma and colleagues,” Dr. Leclerc and Dr. Maury wrote in their editorial. “The first is the confirmation that the prognosis of COVID-19 is particularly poor in HSCT recipients, and that its prevention, in the absence of any specific curative treatment with sufficient efficacy, should be at the forefront of concerns.”
The second relates to the risk factors for death among HSCT recipients who develop COVID-19. In addition to previously known risk factors, such as age and gender, the investigators identified transplant-specific factors potentially associated with prognosis – namely, the nearly threefold increase in death among allogeneic HSCT recipients who develop COVID-19 within 12 months of transplant, they explained.
However, the findings are limited by a substantial amount of missing data, short follow-up, and the possibility of selection bias, they noted.
“Further large and well-designed studies with longer follow-up are needed to confirm and refine the results,” the editorialists wrote.
“[A] better understanding of the distinctive features of COVID-19 infection in HSCT recipients will be a necessary and essential step toward improvement of the remarkably poor prognosis observed in this setting,” they added.
The study was funded by the American Society of Hematology; the Leukemia and Lymphoma Society; the National Cancer Institute; the National Heart, Lung and Blood Institute; the National Institute of Allergy and Infectious Diseases; the National Institutes of Health; the Health Resources and Services Administration; and the Office of Naval Research. Dr. Sharma receives support for the conduct of industry-sponsored trials from Vertex Pharmaceuticals, CRISPR Therapeutics, and Novartis and consulting fees from Spotlight Therapeutics. Dr. Leclerc and Dr. Maury disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Lenvatinib Plus Pembrolizumab Improves Outcomes in Previously Untreated Advanced Clear Cell Renal Cell Carcinoma
Study Overview
Objective. To evaluate the efficacy and safety of lenvatinib in combination with everolimus or pembrolizumab compared with sunitinib alone for the treatment of newly diagnosed advanced clear cell renal cell carcinoma (ccRCC).
Design. Global, multicenter, randomized, open-label, phase 3 trial.
Intervention. Patients were randomized in a 1:1:1 ratio to receive treatment with 1 of 3 regimens: lenvatinib 20 mg daily plus pembrolizumab 200 mg on day 1 of each 21-day cycle; lenvatinib 18 mg daily plus everolimus 5 mg once daily for each 21-day cycle; or sunitinib 50 mg daily for 4 weeks followed by 2 weeks off. Patients were stratified according to geographic region and Memorial Sloan Kettering Cancer Center (MSKCC) prognostic risk group.
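For quick comparison, the three regimens can also be expressed as a small data structure. This is an illustrative sketch only; the field names are our own shorthand, not terms from the trial protocol.

```python
# The three CLEAR trial regimens summarized as a simple structure (illustrative).
REGIMENS = {
    "lenvatinib_plus_pembrolizumab": {
        "lenvatinib_mg_daily": 20,
        "pembrolizumab_mg": 200,      # given on day 1 of each 21-day cycle
        "cycle_days": 21,
    },
    "lenvatinib_plus_everolimus": {
        "lenvatinib_mg_daily": 18,
        "everolimus_mg_daily": 5,
        "cycle_days": 21,
    },
    "sunitinib": {
        "sunitinib_mg_daily": 50,
        "schedule": "4 weeks on, 2 weeks off",
    },
}
```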
Setting and participants. A total of 1417 patients were screened, and 1069 patients underwent randomization between October 2016 and July 2019: 355 patients were randomized to the lenvatinib plus pembrolizumab group, 357 to the lenvatinib plus everolimus group, and 357 to the sunitinib alone group. Patients were required to have a diagnosis of previously untreated advanced renal cell carcinoma with a clear-cell component, a Karnofsky performance status of at least 70, adequate renal function, and controlled blood pressure with or without antihypertensive medications.
Main outcome measures. The primary endpoint was progression-free survival (PFS) as evaluated by an independent review committee using RECIST, version 1.1. Imaging was performed at the time of screening and every 8 weeks thereafter. Secondary endpoints were safety, overall survival (OS), objective response rate, and investigator-assessed PFS; duration of response was also assessed. Safety and adverse events were assessed during the treatment period and for up to 30 days after the last dose of the trial drug.
Main results. The baseline characteristics were well balanced between the treatment groups. More than 70% of enrolled participants were male. Approximately 60% of participants were MSKCC intermediate risk, 27% were favorable risk, and 9% were poor risk. Patients with a PD-L1 combined positive score of 1% or more represented 30% of the population; the remainder had a PD-L1 combined positive score of <1% (30%) or no available data (38%). Liver metastases were present in 17% of patients at baseline in each group, and 70% of patients had undergone a prior nephrectomy. The data cutoff for PFS occurred in August 2020, and the median follow-up for OS was 26.6 months. Around 40% of participants in the lenvatinib plus pembrolizumab group, 31% in the lenvatinib plus everolimus group, and 18.8% in the sunitinib group were still receiving trial treatment at data cutoff. The leading cause of treatment discontinuation was disease progression. Approximately 50% of patients in the lenvatinib plus everolimus and sunitinib groups received subsequent checkpoint inhibitor therapy after progression.
The median PFS in the lenvatinib plus pembrolizumab group was significantly longer than in the sunitinib group, 23.9 months vs 9.2 months (hazard ratio [HR], 0.39; 95% CI, 0.32-0.49; P < 0.001). The median PFS was also significantly longer in the lenvatinib plus everolimus group than in the sunitinib group, 14.7 vs 9.2 months (HR, 0.65; 95% CI, 0.53-0.80; P < 0.001). The PFS benefit favored the lenvatinib combination groups over sunitinib in all subgroups, including the MSKCC prognostic risk groups. The median OS was not reached with any treatment; 79% of patients in the lenvatinib plus pembrolizumab group, 66% in the lenvatinib plus everolimus group, and 70% in the sunitinib group were still alive at 24 months. Survival was significantly longer in the lenvatinib plus pembrolizumab group than in the sunitinib group (HR, 0.66; 95% CI, 0.49-0.88; P = 0.005), and the OS benefit favored lenvatinib plus pembrolizumab over sunitinib in both the PD-L1-positive and PD-L1-negative groups. The median duration of response in the lenvatinib plus pembrolizumab group was 25.8 months, compared with 16.6 months and 14.6 months in the lenvatinib plus everolimus and sunitinib groups, respectively. Complete response rates were higher with lenvatinib plus pembrolizumab (16%) than with lenvatinib plus everolimus (9.8%) or sunitinib (4.2%). The median time to response was around 1.9 months in all 3 groups.
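As a brief interpretive note (a generic definition, not the trial's statistical methods): under the proportional-hazards assumption, the reported HR compares instantaneous event rates between arms,

```latex
\[
\mathrm{HR}(t) \;=\; \frac{h_{\text{lenvatinib + pembrolizumab}}(t)}{h_{\text{sunitinib}}(t)} \;\approx\; 0.39,
\]
```

so, taken at face value, patients on the combination faced a roughly 61% lower instantaneous risk of progression or death at any given time, assuming the ratio stays constant over follow-up.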
The most frequent adverse events seen in all groups were diarrhea, hypertension, fatigue, and nausea. Hypothyroidism was seen more frequently in the lenvatinib plus pembrolizumab group (47%). Grade 3 adverse events were seen in approximately 80% of patients in all groups, and the most common grade 3 or higher adverse event in all 3 groups was hypertension. The median time to treatment discontinuation because of side effects was 8.97 months in the lenvatinib plus pembrolizumab arm, 5.49 months in the lenvatinib plus everolimus group, and 4.57 months in the sunitinib group. In the lenvatinib plus pembrolizumab group, 15 patients had grade 5 adverse events, including 11 fatal events not related to disease progression. In the lenvatinib plus everolimus group, 22 patients had grade 5 events, with 10 fatal events not related to disease progression. In the sunitinib group, 11 patients had grade 5 events, and only 2 fatal events were not linked to disease progression.
Conclusion. The combination of lenvatinib plus pembrolizumab significantly prolongs PFS and OS compared with sunitinib in patients with previously untreated and advanced ccRCC. The median OS has not yet been reached.
Commentary
The results of the current phase 3 CLEAR trial highlight the efficacy and safety of lenvatinib plus pembrolizumab as a first-line treatment in advanced ccRCC. This trial adds to the rapidly growing body of literature supporting the notion that combining anti-PD-1-based therapy with either CTLA-4 antibodies or VEGF receptor tyrosine kinase inhibitors (TKIs) improves outcomes in previously untreated patients with advanced ccRCC. Previously presented data from KEYNOTE-426 (pembrolizumab plus axitinib), CheckMate 214 (nivolumab plus ipilimumab), and JAVELIN Renal 101 (avelumab plus axitinib) have also shown improved outcomes with combination therapy in the frontline setting.1-4 While the landscape of therapeutic options in the frontline setting continues to grow, there remains a lack of clarity as to how to tailor therapeutic decisions for specific patient populations. The exception is nivolumab plus ipilimumab, which is currently indicated for IMDC intermediate- or poor-risk patients.
The combination of VEGFR TKI therapy and PD-1 antibodies provides rapid disease control, with a median time to response of 1.9 months in the current study and, generally speaking, a low risk of progression in the first 6 months of therapy. While cross-trial comparisons are always problematic, the PFS reported in this study and others with VEGFR TKI and PD-1 antibody combinations is quite impressive and surpasses that noted in CheckMate 214.3 While the median OS has not yet been reached, the long duration of PFS and the complete response rate of 16% in this study certainly make this an attractive frontline option for newly diagnosed patients with advanced ccRCC. Longer follow-up is needed to confirm the survival benefit noted.
Applications for Clinical Practice
The current data support the use of VEGFR TKI plus anti-PD-1 therapy in the frontline setting. How to choose between such combination regimens and combination immunotherapy remains unclear, and further biomarker-based assessments are needed to help guide therapeutic decisions for our patients.
1. Motzer R, Alekseev B, Rha SY, et al. Lenvatinib plus pembrolizumab or everolimus for advanced renal cell carcinoma [published online ahead of print, 2021 Feb 13]. N Engl J Med. 2021. doi:10.1056/NEJMoa2035716
2. Rini BI, Plimack ER, Stus V, et al. Pembrolizumab plus axitinib versus sunitinib for advanced renal-cell carcinoma. N Engl J Med. 2019;380(12):1116-1127.
3. Motzer RJ, Tannir NM, McDermott DF, et al. Nivolumab plus ipilimumab versus sunitinib in advanced renal-cell carcinoma. N Engl J Med. 2018;378(14):1277-1290.
4. Motzer RJ, Penkov K, Haanen J, et al. Avelumab plus axitinib versus sunitinib for advanced renal-cell carcinoma. N Engl J Med. 2019;380(12):1103-1115.
“Thank You for Not Letting Me Crash and Burn”: The Imperative of Quality Physician Onboarding to Foster Job Satisfaction, Strengthen Workplace Culture, and Advance the Quadruple Aim
From The Ohio State University College of Medicine Department of Family and Community Medicine, Columbus, OH (Candy Magaña, Jná Báez, Christine Junk, Drs. Ahmad, Conroy, and Olayiwola); The Ohio State University College of Medicine Center for Primary Care Innovation and Transformation (Candy Magaña, Jná Báez, and Dr. Olayiwola); and The Ohio State University Wexner Medical Center (Christine Harsh, Erica Esposito).
Much has been discussed about the growing crisis of professional dissatisfaction among physicians, with increasing efforts being made to incorporate physician wellness into health system strategies that move from the Triple to the Quadruple Aim.1 For many years, our health care system has been focused on improving the health of populations, optimizing the patient experience, and reducing the cost of care (Triple Aim). The inclusion of the fourth aim, improving the experience of the teams that deliver care, has become paramount in achieving the other aims.
An area often overlooked in this focus on wellness, however, is the importance of the earliest days of employment to shape and predict long-term career contentment. This is a missed opportunity, as data suggest that organizations with standardized onboarding programs boast a 62% increased productivity rate and a 50% greater retention rate among new hires.2,3 Moreover, a study by the International Institute for Management Development found that businesses lose an estimated $37 billion annually because employees do not fully understand their jobs.4 The report ties losses to “actions taken by employees who have misunderstood or misinterpreted company policies, business processes, job function, or a combination of the three.” Additionally, onboarding programs that focus strictly on technical or functional orientation tasks miss important opportunities for culture integration during the onboarding process.5 It is therefore imperative to look to effective models of employee onboarding to develop systems that position physicians and practices for success.
Challenges With Traditional Physician Onboarding
In recent years, the Department of Family and Community Medicine at The Ohio State University College of Medicine has experienced rapid organizational change. Like many primary care systems nationwide responding to disruption in health care and changing demands on the clinical workforce, the department has hired new leadership, revised strategic priorities, and witnessed an influx of faculty and staff. It has also planned an expansion of ambulatory services that will more than double the clinical workforce over the next 3 years. While an exciting time, there has been a growing need to align strategy, culture, and human capital during these changes.
As we entered this phase of transformation, we recognized that our highly individualized, ad hoc orientation system had shortcomings. While revamping our physician recruitment process, stakeholder workgroup members specifically noted that new physician orientation needed improvement, as no consistent structures had been in place. New physician orientation had been a major gap for years, resulting in dissatisfaction during the first few months of practice, early physician turnover, and staff frustration. From physicians, we continued to hear frustration and unanswered questions regarding expectations, norms, structures, and processes.
Many new hires were left with a kind of "trial by fire" entry into their roles. On the first day of clinic, a new physician would likely need to simultaneously see patients, learn the nuances of the electronic health record (EHR), figure out where the break room was located, and quickly learn the population health issues of the patients they were serving. Opportunities to meet key clinic site leadership arose at random, and new physicians might not meet leadership or staff until months into their tenure, which fostered neither a sense of belonging nor an understanding of the many resources available to them. We learned that the quality of these ad hoc orientations also varied with the experience and priorities of each practice's clinic and administrative leaders, who themselves felt ill-equipped to provide a consistent, robust, and confidence-building experience. In addition, practice site management was rarely given advance time to prepare for the arrival of new physicians, which led physicians to perceive practices as unwelcoming and disorganized. Their first days were often spent seeing patients in clinic with no structured orientation, no understanding of workflows, and no knowledge of the practice's systems.
Institutionally, the interview process satisfied some transfer of knowledge, but we were unclear about what was being consistently shared and understood across the multiple ambulatory locations where our physicians enter practice. More importantly, we knew we were missing a critical opportunity to use orientation to imbue the workforce with our other values of diversity and inclusion, health equity, and operational excellence. Based on anecdotal insights from employees and our own review of successful onboarding approaches in other industries, we also knew a more structured welcome would predict greater long-term career satisfaction for physicians and create a foundation for providing optimal care once clinical encounters began.
Reengineering Physician Onboarding
In 2019, our department developed a multipronged approach to physician onboarding, which is already paying dividends in easing acculturation and fostering team cohesion. The department tapped its Center for Primary Care Innovation and Transformation (PCIT) to direct this effort, based on its expertise in practice transformation, clinical transformation and adaptations, and workflow efficiency through process and quality improvement. The PCIT team provides support to the department and the entire health system focused on technology and innovation, health equity, and health care efficiency.6 They applied many of the tools used in the Clinical Transformation in Technology approach to lead this initiative.7
The PCIT team began identifying key stakeholders (department, clinical and ambulatory leadership, clinicians and clinical staff, community partners, human resources, and resident physicians), and then engaging those individuals in dialogue surrounding orientation needs. During scheduled in-person and virtual work sessions, stakeholders were asked to provide input on pain points for new physicians and clinic leadership and were then empowered to create an onboarding program. Applying health care quality improvement techniques, we leveraged workflow mapping, current and future state planning, and goal setting, led by the skilled process improvement and clinical transformation specialists. We coordinated a multidisciplinary process improvement team that included clinic administrators, medical directors, human resources, administrative staff, ambulatory and resident leadership, clinical leadership, and recruitment liaisons. This diverse group of leadership and staff was brought together to address these critical identified gaps and weaknesses in new physician onboarding.
Through a series of learning sessions, the workgroup provided input that was used to form an itemized physician onboarding schedule, which was then leveraged to develop Plan-Do-Study-Act (PDSA) cycles, collecting feedback in real time. Some issues that seem small can cause major distress for new physicians. For example, in our inaugural orientation implementation, a physician provided feedback that they wanted to obtain information on setting up their work email on their personal devices and was having considerable trouble figuring out how to do so. This particular topic was not initially included in the first iteration of the Department’s orientation program. We rapidly sought out different ways to embed that into the onboarding experience. The first PDSA involved integrating the university information technology team (IT) into the process but was not successful because it required extra work for the new physician and reliance on the IT schedule. The next attempt was to have IT train a department staff member, but again, this still required that the physician find time to connect with that staff member. Finally, we decided to obtain a useful tip sheet that clearly outlined the process and could be included in orientation materials. This gave the new physicians control over how and when they would work on this issue. Based on these learnings, this was incorporated as a standing agenda item and resource for incoming physicians.
Essential Elements of Effective Onboarding
The new physician onboarding program consists of 5 key elements: (1) 2-week acclimation period; (2) peer learning and connection; (3) training before beginning patient care; (4) standardization, transparency, and accountability in all processes; (5) ongoing feedback for continued program improvement with individual support (Figure).
The program begins with a 2-week period of intentional investment in individual success, during which time no patients are scheduled. In week 1, we work with new hires to set expectations for performance, understand departmental norms, and introduce culture. Physicians meet formally and informally with department and institutional leadership, as well as attend team meetings and trainings that include a range of administrative and compliance requirements, such as quality standards and expectations, compliance, billing and coding specific to family medicine, EHR management, and institutionally mandated orientations. We are also adding implicit bias and antiracism training during this period, which are essential to creating a culture of unity and belonging.
During week 2, we focus on clinic-level orientation, assigning new hires an orientation buddy and a department sponsor, such as a physician lead or medical director. Physicians spend time with leadership at their clinic as they nurture relationships important for mentorship, sponsorship, and peer support. They also meet care team members, including front desk associates, medical assistants, behavioral health clinicians, nutritionists, social workers, pharmacists, and other key colleagues and care team members. This introduces the physician to the clinical environment and physical space as well as acclimates the physician to workflows and feedback loops for regular interaction.
When physicians ultimately begin patient care, they begin with an expected productivity rate of 50%, followed by an expected productivity rate of 75%, and then an expected productivity rate of 100%. This steady increase occurs over 3 to 4 weeks depending on the physician’s comfort level. They are also provided monthly reports on work relative value unit performance so that they can track and adapt practice patterns as necessary.More details on the program can be found in Appendix 1.
Takeaways From the Implementation of the New Program
Give time for new physicians to focus on acclimating to the role and environment.
The initial 2-week period of transition—without direct patient care—ensures that physicians feel comfortable in their new ecosystem. This also supports personal transitions, as many new hires are managing relocation and acclimating themselves and their families to new settings. Even residents from our training program who returned as attending physicians found this flexibility and slow reentry essential. This also gives the clinic time to orient to an additional provider, nurture them into the team culture, and develop relationships with the care team.
Cultivate spaces for shared learning, problem-solving, and peer connection.
Orientation is delivered primarily through group learning sessions with cohorts of new physicians, thus developing spaces for networking, fostering psychological safety, encouraging personal and professional rapport, emphasizing interactive learning, and reinforcing scheduling blocks at the departmental level. New hires also participate in peer shadowing to develop clinical competencies and are assigned a workplace buddy to foster a sense of belonging and create opportunities for additional knowledge sharing and cross-training.
Strengthen physician knowledge base, confidence, and comfort in the workplace before beginning direct patient care.
Without fluency in the workflows, culture, and operations of a practice, the urgency to have physicians begin clinical care can result in frustration for the physician, patients, and clinical and administrative staff. Therefore, we complete essential training prior to seeing any patients. This includes clinical workflows, referral processes, use of alternate modalities of care (eg, telehealth, eConsults), billing protocols, population health training, patient resources, office resources, and other essential daily processes and tools. This creates efficiency in administrative management, increased productivity, and better understanding of resources available for patients’ medical, social, and behavioral needs when patient care begins.
Embrace standardization, transparency, and accountability in as many processes as possible.
Standardized knowledge-sharing and checklists are mandated at every step of the orientation process, requiring sign off from the physician lead, practice manager, and new physicians upon completion. This offers all parties the opportunity to play a role in the delivery of and accountability for skills transfer and empowers new hires to press pause if they feel unsure about any domain in the training. It is also essential in guaranteeing that all physicians—regardless of which ambulatory location they practice in—receive consistent information and expectations. A sample checklist can be found in Appendix 2.
Commit to collecting and acting on feedback for continued program improvement and individual support.
As physicians complete the program, it is necessary to create structures to measure and enhance its impact, as well as evaluate how physicians are faring following the program. Each physician completes surveys at the end of the orientation program, attends a 90-day post-program check-in with the department chair, and receives follow-up trainings on advanced topics as they become more deeply embedded in the organization.
Lessons Learned
Feedback from surveys and 90-day check-ins with leadership and physicians reflect a high degree of clarity on job roles and duties, a sense of team camaraderie, easier system navigation, and a strong sense of support. We do recognize that sustaining change takes time and our study is limited by data demonstrating the impact of these efforts. We look forward to sharing more robust data from surveys and qualitative interviews with physicians, clinical leadership, and staff in the future. Our team will conduct interviews at 90-day and 180-day checkpoints with new physicians who have gone through this program, followed by a check-in after 1 year. Additionally, new physicians as well as key stakeholders, such as physician leads, practice managers, and members of the recruitment team, have started to participate in short surveys. These are designed to better understand their experiences, what worked well, what can be improved, and the overall satisfaction of the physician and other members of the extended care team.
What follows are some comments made by the initial group of physicians that went through this program and participated in follow-up interviews:
“I really feel like part of a bigger team.”
“I knew exactly what do to when I walked into the exam room on clinic Day 1.”
“It was great to make deep connections during the early process of joining.”
“Having a buddy to direct questions and ideas to is amazing and empowering.”
“Even though the orientation was long, I felt that I learned so much that I would not have otherwise.”
“Thank you for not letting me crash and burn!”
“Great culture! I love understanding our values of health equity, diversity, and inclusion.”
In the months since our endeavor began, we have learned just how essential it is to fully and effectively integrate new hires into the organization for their own satisfaction and success—and ours. Indeed, we cannot expect to achieve the Quadruple Aim without investing in the kind of transparent and intentional orientation process that defines expectations, aligns cultural values, mitigates costly and stressful operational misunderstandings, and communicates to physicians that, not only do they belong, but their sense of belonging is our priority. While we have yet to understand the impact of this program on the fourth aim of the Quadruple Aim, we are hopeful that the benefits will be far-reaching.
It is our ultimate hope that programs like this: (1) give physicians the confidence needed to create impactful patient-centered experiences; (2) enable physicians to become more cost-effective and efficient in care delivery; (3) allow physicians to understand the populations they are serving and access tools available to mitigate health disparities and other barriers; and (4) improve the collective experience of every member of the care team, practice leadership, and clinician-patient partnership.
Corresponding author: J. Nwando Olayiwola, MD, MPH, FAAFP, The Ohio State University College of Medicine, Department of Family and Community Medicine, 2231 N High St, Ste 250, Columbus, OH 43210; [email protected].
Financial disclosures: None.
Keywords: physician onboarding; Quadruple Aim; leadership; clinician satisfaction; care team satisfaction.
1. Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12(6): 573-576.
2. Maurer R. Onboarding key to retaining, engaging talent. Society for Human Resource Management. April 16, 2015. Accessed January 8, 2021. https://www.shrm.org/resourcesandtools/hr-topics/talent-acquisition/pages/onboarding-key-retaining-engaging-talent.aspx
3. Boston AG. New hire onboarding standardization and automation powers productivity gains. GlobeNewswire. March 8, 2011. Accessed January 8, 2021. http://www.globenewswire.com/news-release/2011/03/08/994239/0/en/New-Hire-Onboarding-Standardization-and-Automation-Powers-Productivity-Gains.html
4. $37 billion – US and UK business count the cost of employee misunderstanding. HR.com – Maximizing Human Potential. June 18, 2008. Accessed March 10, 2021. https://www.hr.com/en/communities/staffing_and_recruitment/37-billion---us-and-uk-businesses-count-the-cost-o_fhnduq4d.html
5. Employers risk driving new hires away with poor onboarding. Society for Human Resource Management. February 23, 2018. Accessed March 10, 2021. https://www.shrm.org/resourcesandtools/hr-topics/talent-acquisition/pages/employers-new-hires-poor-onboarding.aspx
6. Center for Primary Care Innovation and Transformation. The Ohio State University College of Medicine. Accessed January 8, 2021. https://wexnermedical.osu.edu/departments/family-medicine/pcit
7. Olayiwola, J.N. and Magaña, C. Clinical transformation in technology: a fresh change management approach for primary care. Harvard Health Policy Review. February 2, 2019. Accessed March 10, 2021. http://www.hhpronline.org/articles/2019/2/2/clinical-transformation-in-technology-a-fresh-change-management-approach-for-primary-care
From The Ohio State University College of Medicine Department of Family and Community Medicine, Columbus, OH (Candy Magaña, Jná Báez, Christine Junk, Drs. Ahmad, Conroy, and Olayiwola); The Ohio State University College of Medicine Center for Primary Care Innovation and Transformation (Candy Magaña, Jná Báez, and Dr. Olayiwola); and The Ohio State University Wexner Medical Center (Christine Harsh, Erica Esposito).
Much has been written about the growing crisis of professional dissatisfaction among physicians, and health systems are increasingly incorporating physician wellness into strategies that move from the Triple to the Quadruple Aim.1 For many years, our health care system has focused on improving the health of populations, optimizing the patient experience, and reducing the cost of care (the Triple Aim). The fourth aim, improving the experience of the teams that deliver care, has become paramount to achieving the other three.
An area often overlooked in this focus on wellness, however, is the role of the earliest days of employment in shaping and predicting long-term career contentment. This is a missed opportunity: data suggest that organizations with standardized onboarding programs see 62% greater productivity and 50% better retention among new hires.2,3 Moreover, a study by the International Institute for Management Development found that businesses lose an estimated $37 billion annually because employees do not fully understand their jobs.4 The report ties these losses to “actions taken by employees who have misunderstood or misinterpreted company policies, business processes, job function, or a combination of the three.” Additionally, onboarding programs that focus strictly on technical or functional orientation tasks miss important opportunities for cultural integration.5 It is therefore imperative to look to effective models of employee onboarding to develop systems that position physicians and practices for success.
Challenges With Traditional Physician Onboarding
In recent years, the Department of Family and Community Medicine at The Ohio State University College of Medicine has experienced rapid organizational change. Like many primary care systems nationwide responding to disruption in health care and changing demands on the clinical workforce, the department has hired new leadership, revised strategic priorities, and witnessed an influx of faculty and staff. It has also planned an expansion of ambulatory services that will more than double the clinical workforce over the next 3 years. While an exciting time, there has been a growing need to align strategy, culture, and human capital during these changes.
As we entered this phase of transformation, we recognized that our highly individualized, ad hoc orientation system had clear shortcomings. While revamping our physician recruitment process, stakeholder workgroup members specifically flagged new physician orientation for improvement, as no consistent structures had been in place. New physician orientation had been a major gap for years, resulting in dissatisfaction during the first few months of practice, early physician turnover, and staff frustration. From physicians, we continued to hear frustration and unanswered questions regarding expectations, norms, structures, and processes.
Many new hires were left with a kind of “trial by fire” entry into their roles. On the first day of clinic, a new physician would likely need to simultaneously see patients, learn the nuances of the electronic health record (EHR), figure out where the break room was located, and quickly absorb the population health issues affecting the patients they were serving. Opportunities to meet key clinic site leadership arose at random, and new physicians might not meet leadership or staff until months into their tenure, which fostered neither a sense of belonging nor an understanding of the many resources available to them. We learned that the quality of these ad hoc orientations also varied with the experience and priorities of each practice’s clinic and administrative leaders, who themselves felt ill-equipped to provide a consistent, robust, and confidence-building experience. In addition, practice site management was rarely given advance notice to prepare for the arrival of new physicians, which led physicians to perceive practices as unwelcoming and disorganized. Their first days were often spent seeing patients in clinic with no structured orientation, no understanding of workflows, and little knowledge of practice systems.
Institutionally, the interview process accomplished some transfer of knowledge, but we were unclear about what was being consistently shared and understood across the multiple ambulatory locations where our physicians enter practice. More importantly, we knew we were missing a critical opportunity to use orientation to instill values of diversity and inclusion, health equity, and operational excellence in the workforce. Based on anecdotal insights from employees and our own review of successful onboarding approaches in other industries, we also knew a more structured welcoming process would predict greater long-term career satisfaction for physicians and lay a foundation for optimal patient care once clinical encounters began.
Reengineering Physician Onboarding
In 2019, our department developed a multipronged approach to physician onboarding, which is already paying dividends in easing acculturation and fostering team cohesion. The department tapped its Center for Primary Care Innovation and Transformation (PCIT) to direct the effort, based on its expertise in practice and clinical transformation and in workflow efficiency through process and quality improvement. The PCIT team supports the department and the entire health system in technology and innovation, health equity, and health care efficiency,6 and it applied many of the tools of the Clinical Transformation in Technology approach to lead this initiative.7
The PCIT team began by identifying key stakeholders (department, clinical, and ambulatory leadership; clinicians and clinical staff; community partners; human resources; and resident physicians) and engaging them in dialogue about orientation needs. During scheduled in-person and virtual work sessions, stakeholders were asked to identify pain points for new physicians and clinic leadership and were then empowered to create an onboarding program. Applying health care quality improvement techniques, we used workflow mapping, current and future state planning, and goal setting, led by skilled process improvement and clinical transformation specialists. We coordinated a multidisciplinary process improvement team that included clinic administrators, medical directors, human resources, administrative staff, ambulatory and resident leadership, clinical leadership, and recruitment liaisons, brought together to address the identified gaps and weaknesses in new physician onboarding.
Through a series of learning sessions, the workgroup provided input that was used to build an itemized physician onboarding schedule, which in turn drove Plan-Do-Study-Act (PDSA) cycles with feedback collected in real time. Issues that seem small can cause major distress for new physicians. For example, during our inaugural orientation, a physician reported wanting information on setting up work email on personal devices and having considerable trouble figuring out how to do so. This topic had not been included in the first iteration of the department’s orientation program, so we rapidly tested ways to embed it in the onboarding experience. The first PDSA cycle brought the university information technology (IT) team into the process, but this failed because it created extra work for the new physician and depended on the IT schedule. The next attempt had IT train a department staff member, but this still required the physician to find time to connect with that staff member. Finally, we obtained a tip sheet that clearly outlined the process and could be included in orientation materials, giving new physicians control over how and when to complete the setup. Based on these learnings, the tip sheet became a standing agenda item and resource for incoming physicians.
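To make the iteration loop concrete, the sketch below (in Python, our choice) traces a pain point like the email-setup issue through successive PDSA cycles. It is illustrative only: the types, field names, and status labels are our assumptions, not tooling used by the program.

```python
from dataclasses import dataclass, field

@dataclass
class PDSACycle:
    plan: str    # the countermeasure proposed for the pain point
    do: str      # what was actually tried
    study: str   # real-time feedback gathered during the attempt
    act: str     # outcome: "adopt", "adapt", or "abandon"

@dataclass
class PainPoint:
    description: str
    cycles: list = field(default_factory=list)

    def resolved(self) -> bool:
        # A pain point closes once some cycle ends in adoption.
        return any(c.act == "adopt" for c in self.cycles)

# The email-setup issue from the text, traced through its three cycles.
email_setup = PainPoint("Set up work email on personal devices")
email_setup.cycles += [
    PDSACycle("Embed university IT in orientation", "IT session arranged",
              "Extra work for the physician; tied to the IT schedule", "abandon"),
    PDSACycle("Have IT train a department staff member", "Staff member trained",
              "Physician still had to find time to meet", "abandon"),
    PDSACycle("Include a step-by-step tip sheet in materials", "Tip sheet added",
              "Physicians complete setup on their own schedule", "adopt"),
]
print(email_setup.resolved())  # True -> becomes a standing agenda item
```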
Essential Elements of Effective Onboarding
The new physician onboarding program consists of 5 key elements: (1) a 2-week acclimation period; (2) peer learning and connection; (3) training before beginning patient care; (4) standardization, transparency, and accountability in all processes; and (5) ongoing feedback for continued program improvement with individual support (Figure).
The program begins with a 2-week period of intentional investment in individual success, during which no patients are scheduled. In week 1, we work with new hires to set performance expectations, convey departmental norms, and introduce the culture. Physicians meet formally and informally with department and institutional leadership and attend team meetings and trainings that cover a range of administrative and compliance requirements, such as quality standards and expectations, billing and coding specific to family medicine, EHR management, and institutionally mandated orientations. We are also adding implicit bias and antiracism training to this period, which is essential to creating a culture of unity and belonging.
During week 2, we focus on clinic-level orientation, assigning each new hire an orientation buddy and a department sponsor, such as a physician lead or medical director. Physicians spend time with leadership at their clinic, nurturing relationships important for mentorship, sponsorship, and peer support. They also meet care team members, including front desk associates, medical assistants, behavioral health clinicians, nutritionists, social workers, pharmacists, and other key colleagues. This introduces the physician to the clinical environment and physical space and acclimates them to workflows and the feedback loops used for regular interaction.
When physicians ultimately begin patient care, they start at an expected productivity rate of 50%, advancing to 75% and then 100%. This steady increase occurs over 3 to 4 weeks, depending on the physician’s comfort level. Physicians also receive monthly reports on work relative value unit performance so they can track and adapt their practice patterns as needed. More details on the program can be found in Appendix 1.
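As a worked example of this ramp, the sketch below computes hypothetical weekly visit targets at each stage. The full-template size of 20 visits per day and the 4 clinic days per week are assumptions for illustration; the article specifies only the 50%, 75%, and 100% rates and the 3- to 4-week window.

```python
# Hypothetical worked example of the 50% -> 75% -> 100% ramp.
# Template size and clinic days are illustrative assumptions only;
# the article does not specify either figure.
FULL_TEMPLATE_VISITS = 20   # assumed visits per clinic day at 100%
CLINIC_DAYS_PER_WEEK = 4    # assumed clinical schedule

for stage, rate in enumerate([0.50, 0.75, 1.00], start=1):
    weekly_visits = round(FULL_TEMPLATE_VISITS * rate) * CLINIC_DAYS_PER_WEEK
    print(f"Ramp stage {stage}: {rate:.0%} -> ~{weekly_visits} visits/week")
# Ramp stage 1: 50% -> ~40 visits/week
# Ramp stage 2: 75% -> ~60 visits/week
# Ramp stage 3: 100% -> ~80 visits/week
```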
Takeaways From the Implementation of the New Program
Give time for new physicians to focus on acclimating to the role and environment.
The initial 2-week period of transition—without direct patient care—ensures that physicians feel comfortable in their new ecosystem. This also supports personal transitions, as many new hires are managing relocation and acclimating themselves and their families to new settings. Even residents from our training program who returned as attending physicians found this flexibility and slow reentry essential. This also gives the clinic time to orient to an additional provider, nurture them into the team culture, and develop relationships with the care team.
Cultivate spaces for shared learning, problem-solving, and peer connection.
Orientation is delivered primarily through group learning sessions with cohorts of new physicians, thus developing spaces for networking, fostering psychological safety, encouraging personal and professional rapport, emphasizing interactive learning, and reinforcing scheduling blocks at the departmental level. New hires also participate in peer shadowing to develop clinical competencies and are assigned a workplace buddy to foster a sense of belonging and create opportunities for additional knowledge sharing and cross-training.
Strengthen physician knowledge base, confidence, and comfort in the workplace before beginning direct patient care.
Without fluency in the workflows, culture, and operations of a practice, the urgency to have physicians begin clinical care can breed frustration for the physician, patients, and clinical and administrative staff. We therefore complete essential training before any patients are seen, covering clinical workflows, referral processes, alternate modalities of care (eg, telehealth, eConsults), billing protocols, population health, patient resources, office resources, and other essential daily processes and tools. The result, once patient care begins, is more efficient administrative management, higher productivity, and a better understanding of the resources available for patients’ medical, social, and behavioral needs.
Embrace standardization, transparency, and accountability in as many processes as possible.
Standardized knowledge-sharing and checklists are mandated at every step of the orientation process, requiring sign-off from the physician lead, the practice manager, and the new physician upon completion. This gives all parties a role in the delivery of, and accountability for, skills transfer, and it empowers new hires to press pause if they feel unsure about any domain of the training. It is also essential to guaranteeing that all physicians, regardless of which ambulatory location they practice in, receive consistent information and expectations. A sample checklist can be found in Appendix 2.
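The completion rule this implies is simple enough to state in code. In the sketch below, the three roles come from the text, while the data model is our own illustration: an orientation item counts as complete only when every required party has signed off.

```python
# Minimal sketch of the three-party sign-off rule.
REQUIRED_SIGNOFFS = {"physician_lead", "practice_manager", "new_physician"}

def item_complete(signoffs):
    # An orientation item is done only when every required party has
    # signed; a new hire can "press pause" by withholding their sign-off.
    return REQUIRED_SIGNOFFS <= set(signoffs)

print(item_complete(["physician_lead", "practice_manager"]))                   # False
print(item_complete(["physician_lead", "practice_manager", "new_physician"]))  # True
```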
Commit to collecting and acting on feedback for continued program improvement and individual support.
As physicians complete the program, structures are needed to measure and enhance its impact and to evaluate how physicians fare afterward. Each physician completes surveys at the end of the orientation program, attends a 90-day post-program check-in with the department chair, and receives follow-up trainings on advanced topics as they become more deeply embedded in the organization.
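A small utility could generate this follow-up calendar from a physician’s start date. In the sketch below, the 90-day, 180-day, and 1-year intervals come from the text; the function name, dictionary keys, and sample date are ours.

```python
from datetime import date, timedelta

def follow_up_schedule(start: date) -> dict:
    # Milestones named in the program description; names are illustrative.
    return {
        "90-day check-in with chair": start + timedelta(days=90),
        "180-day interview": start + timedelta(days=180),
        "1-year check-in": start + timedelta(days=365),
    }

for label, when in follow_up_schedule(date(2021, 1, 4)).items():
    print(f"{label}: {when.isoformat()}")
```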
Lessons Learned
Feedback from surveys and 90-day check-ins with leadership and physicians reflects a high degree of clarity about job roles and duties, a sense of team camaraderie, easier system navigation, and strong feelings of support. We recognize that sustaining change takes time and that our study is limited by the lack of long-term data demonstrating the impact of these efforts. We look forward to sharing more robust findings from surveys and qualitative interviews with physicians, clinical leadership, and staff. Our team will interview new physicians who have gone through the program at 90-day and 180-day checkpoints, followed by a check-in after 1 year. Additionally, new physicians and key stakeholders, such as physician leads, practice managers, and members of the recruitment team, have begun completing short surveys designed to capture their experiences, what worked well, what can be improved, and the overall satisfaction of the physician and the extended care team.
What follows are some comments made by the initial group of physicians who went through this program and participated in follow-up interviews:
“I really feel like part of a bigger team.”
“I knew exactly what to do when I walked into the exam room on clinic Day 1.”
“It was great to make deep connections during the early process of joining.”
“Having a buddy to direct questions and ideas to is amazing and empowering.”
“Even though the orientation was long, I felt that I learned so much that I would not have otherwise.”
“Thank you for not letting me crash and burn!”
“Great culture! I love understanding our values of health equity, diversity, and inclusion.”
In the months since our endeavor began, we have learned just how essential it is to fully and effectively integrate new hires into the organization, for their satisfaction and success and for ours. Indeed, we cannot expect to achieve the Quadruple Aim without investing in the kind of transparent and intentional orientation process that defines expectations, aligns cultural values, mitigates costly and stressful operational misunderstandings, and communicates to physicians not only that they belong, but that their sense of belonging is our priority. While we have yet to measure this program’s impact on the fourth aim, we are hopeful that the benefits will be far-reaching.
It is our ultimate hope that programs like this will (1) give physicians the confidence needed to create impactful patient-centered experiences; (2) enable physicians to deliver care more cost-effectively and efficiently; (3) help physicians understand the populations they serve and access tools to mitigate health disparities and other barriers; and (4) improve the collective experience of every member of the care team, practice leadership, and the clinician-patient partnership.
Corresponding author: J. Nwando Olayiwola, MD, MPH, FAAFP, The Ohio State University College of Medicine, Department of Family and Community Medicine, 2231 N High St, Ste 250, Columbus, OH 43210; [email protected].
Financial disclosures: None.
Keywords: physician onboarding; Quadruple Aim; leadership; clinician satisfaction; care team satisfaction.
1. Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12(6):573-576.
2. Maurer R. Onboarding key to retaining, engaging talent. Society for Human Resource Management. April 16, 2015. Accessed January 8, 2021. https://www.shrm.org/resourcesandtools/hr-topics/talent-acquisition/pages/onboarding-key-retaining-engaging-talent.aspx
3. Boston AG. New hire onboarding standardization and automation powers productivity gains. GlobeNewswire. March 8, 2011. Accessed January 8, 2021. http://www.globenewswire.com/news-release/2011/03/08/994239/0/en/New-Hire-Onboarding-Standardization-and-Automation-Powers-Productivity-Gains.html
4. $37 billion – US and UK business count the cost of employee misunderstanding. HR.com – Maximizing Human Potential. June 18, 2008. Accessed March 10, 2021. https://www.hr.com/en/communities/staffing_and_recruitment/37-billion---us-and-uk-businesses-count-the-cost-o_fhnduq4d.html
5. Employers risk driving new hires away with poor onboarding. Society for Human Resource Management. February 23, 2018. Accessed March 10, 2021. https://www.shrm.org/resourcesandtools/hr-topics/talent-acquisition/pages/employers-new-hires-poor-onboarding.aspx
6. Center for Primary Care Innovation and Transformation. The Ohio State University College of Medicine. Accessed January 8, 2021. https://wexnermedical.osu.edu/departments/family-medicine/pcit
7. Olayiwola JN, Magaña C. Clinical transformation in technology: a fresh change management approach for primary care. Harvard Health Policy Review. February 2, 2019. Accessed March 10, 2021. http://www.hhpronline.org/articles/2019/2/2/clinical-transformation-in-technology-a-fresh-change-management-approach-for-primary-care