A Banned Chemical That Is Still Causing Cancer
This transcript has been edited for clarity.
These types of stories usually end with a call for regulation — to ban or restrict said chemical or substance — but in this case, that has already happened. This new carcinogen I’m telling you about is actually an old chemical. And it has not been manufactured or legally imported in the US since 2013.
So, why bother? Because in this case, the chemicals — a group called polybrominated diphenyl ethers (PBDEs) — are still around: in our soil, in our food, and in our blood.
PBDEs are a group of compounds that confer flame-retardant properties to plastics, and they were used extensively in the latter part of the 20th century in electronic enclosures, business equipment, and foam cushioning in upholstery.
But there was a problem. They don’t chemically bond to plastics; they are just sort of mixed in, which means they can leach out. They are hydrophobic, so they don’t get washed out of soil, and, when ingested or inhaled by humans, they dissolve in our fat stores, making them difficult for our bodies to excrete.
PBDEs biomagnify. Small animals can take them up from contaminated soil or water, and those animals are eaten by larger animals, which accumulate higher concentrations of the chemicals. This bioaccumulation increases as you move up the food web until you get to an apex predator — like you and me.
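To make that arithmetic concrete, here is a minimal sketch of how a fat-soluble chemical compounds up a food web. The starting concentration and the threefold enrichment per step are hypothetical numbers chosen purely for illustration, not measured PBDE values.

```python
# Minimal sketch of biomagnification up a food web. The starting level and the
# per-step biomagnification factor (BMF) are hypothetical, illustrative numbers.

base_conc_ng_per_g = 0.01    # assumed PBDE level at the bottom of the web
bmf_per_step = 3.0           # assumed enrichment at each predator-prey step
trophic_levels = ["plankton", "small fish", "large fish", "apex predator"]

conc = base_conc_ng_per_g
for level in trophic_levels:
    conc *= bmf_per_step     # each feeding step concentrates the chemical
    print(f"{level:>14}: {conc:.2f} ng/g lipid")
```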
This is true of lots of chemicals, of course. The concern arises when these chemicals are toxic. Until now, the toxicity data for PBDEs have been pretty limited. There were some animal studies in which rats exposed to extremely high doses developed liver lesions — but I am always very wary of extrapolating high-dose rat toxicity studies to humans. There was also some suggestion that the chemicals could be endocrine disruptors, affecting breast and thyroid tissue.
What about cancer? In 2016, the International Agency for Research on Cancer concluded there was “inadequate evidence in humans for the carcinogenicity of” PBDEs.
In the same report, though, they suggested PBDEs are “probably carcinogenic to humans” based on mechanistic studies.
In other words, we can’t prove they’re carcinogenic — but come on, they probably are.
Finally, we have some evidence that really pushes us toward the carcinogenic conclusion, in the form of this study, appearing in JAMA Network Open. It’s a nice bit of epidemiology leveraging the population-based National Health and Nutrition Examination Survey (NHANES).
Researchers measured PBDE levels in blood samples from 1100 people enrolled in NHANES in 2003 and 2004 and linked them to death records collected over the next 20 years or so.
The first thing to note is that the researchers were able to measure PBDEs in the blood samples. They were in there. They were detectable. And they were variable. Dividing the 1100 participants into low, medium, and high PBDE tertiles, you can see a nearly 10-fold difference across the population.
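For the curious, this is roughly how such a tertile split is constructed. The sketch below uses synthetic lognormal values as stand-ins for the actual NHANES measurements; only the cohort size matches the study.

```python
# Sketch: splitting a cohort into exposure tertiles, as the NHANES analysis does.
# The PBDE levels here are synthetic (lognormal) stand-ins, not the study data.
import numpy as np
import pandas as pd

rng = np.random.default_rng(0)
df = pd.DataFrame({"pbde_ng_per_g": rng.lognormal(mean=3.0, sigma=0.8, size=1100)})

# qcut assigns equal-sized groups by rank: low / medium / high thirds
df["tertile"] = pd.qcut(df["pbde_ng_per_g"], q=3, labels=["low", "medium", "high"])
print(df.groupby("tertile")["pbde_ng_per_g"].agg(["min", "median", "max"]))
```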
Importantly, not many baseline variables correlated with PBDE levels. People in the highest group were a bit younger but had a fairly similar sex distribution, race, ethnicity, education, income, physical activity, smoking status, and body mass index.
This is not a randomized trial, of course — but at least based on these data, exposure levels do seem fairly random, which is what you would expect from an environmental toxin that percolates up through the food chain: such exposures tend to be fairly indiscriminate.
This similarity in baseline characteristics between people with low or high blood levels of PBDE also allows us to make some stronger inferences about the observed outcomes. Let’s take a look at them.
After adjustment for baseline factors, individuals in the highest PBDE group had a 43% higher rate of death from any cause over the follow-up period. This was not enough to achieve statistical significance, but it was close.
But the key finding is deaths due to cancer. After adjustment, cancer deaths occurred four times as frequently among those in the high PBDE group, and that is a statistically significant difference.
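Comparisons like these typically come from an adjusted proportional-hazards model. The sketch below shows how such a model is commonly set up with the lifelines library, on simulated stand-in data; none of the variables or coefficients correspond to the study’s actual estimates.

```python
# Sketch of an adjusted time-to-event analysis of the kind reported here,
# fit on synthetic data. Nothing below comes from the actual study.
import numpy as np
import pandas as pd
from lifelines import CoxPHFitter

rng = np.random.default_rng(1)
n = 1100
df = pd.DataFrame({
    "high_pbde": rng.integers(0, 2, n),              # 1 = highest exposure tertile
    "age": rng.normal(45, 12, n),                    # an example adjustment covariate
    "follow_up_years": rng.exponential(15, n).clip(0.1, 20),
})
# Synthetic outcome: higher event probability in the high-exposure group
p_event = 0.03 + 0.06 * df["high_pbde"]
df["cancer_death"] = (rng.random(n) < p_event).astype(int)

cph = CoxPHFitter()
cph.fit(df, duration_col="follow_up_years", event_col="cancer_death")
cph.print_summary()   # exp(coef) for high_pbde is the adjusted hazard ratio
```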
To be fair, cancer deaths were rare in this cohort. The vast majority of people did not die of anything during the follow-up period regardless of PBDE level. But the data are strongly suggestive of the carcinogenicity of these chemicals.
I should also point out that the researchers are linking the PBDE level at a single time point to all these future events. If PBDE levels remain relatively stable within an individual over time, that’s fine; but if they tend to vary, with intake of different foods for example, that variation would not be captured and would actually lead to an underestimation of the cancer risk.
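This is the classic “regression dilution” problem, and a quick simulation shows why a noisy one-time measurement biases the estimated association toward the null. All numbers here are illustrative.

```python
# Sketch of regression dilution: when a single blood draw is a noisy snapshot
# of long-term exposure, the estimated slope shrinks toward zero, understating
# the true association. All values are illustrative.
import numpy as np

rng = np.random.default_rng(2)
n = 100_000
true_exposure = rng.normal(0, 1, n)                   # long-term average burden
risk = 0.5 * true_exposure + rng.normal(0, 1, n)      # outcome driven by true burden

for noise_sd in [0.0, 0.5, 1.0]:                      # within-person variability
    measured = true_exposure + rng.normal(0, noise_sd, n)  # one-time measurement
    slope = np.polyfit(measured, risk, 1)[0]
    print(f"measurement noise sd={noise_sd}: estimated slope = {slope:.2f}")
# The slope falls from ~0.50 toward ~0.25 as the measurement gets noisier.
```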
The researchers also didn’t have granular enough data to determine the type of cancer, but they do show that rates are similar between men and women, which might point away from the more sex-specific cancer etiologies. Clearly, some more work is needed.
Of course, I started this piece by telling you that these chemicals are already pretty much banned in the United States. So what are we supposed to do about these findings? Studies that have examined the primary ongoing sources of PBDE in our environment suggest that most of our exposure comes from the food we eat, thanks to that biomagnification thing: high-fat fish, meat and dairy products, and fish oil supplements. It may be worth investigating the relative contamination of these products with this new old carcinogen.
Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
What We’ve Learned About Remote Learning
I would have preferred to start this Letter reporting to you that the pandemic is fading out of sight in our rearview mirror. However, I think it is more accurate to say the pandemic is sitting in that blind spot off our passenger-side rear fender. Unless you’re in one of those cars with “blind spot detection” blinking a warning, you probably aren’t giving the pandemic much thought. However, three journalists at The New York Times have used this lull in the pandemic’s newsworthiness to consider the consequences of school closure and remote learning.
From what you may have read and heard, and possibly experienced firsthand, you have a sense that keeping children out of school has been awash in negatives. These journalists looked at all the data they could find and their article is replete with graphs and references. I will just summarize some of what they discovered.
“While poverty and other factors played a role, remote learning was a key driver in academic declines ...” They found there was a direct relationship between the length of school closure and the severity of academic skill loss. The journalists noted that “some time in school was better than no time.” And sadly, “most students have not caught up.”
Poverty played a significant role, with students in economically challenged communities experiencing steeper academic losses. The reporters quoted Stanford Professor Sean F. Reardon, EdD, who has said, “A community’s poverty rate and length of school closures had a ‘roughly equal’ effect.” Poorer school districts tended to continue remote learning longer than those in better-off communities.
At the very beginning of the pandemic, when we were floating in a sea of unknowns, the decision to close schools and take advantage of the new technology that made remote learning possible sounded like the best and maybe only option. However, looking back, Dr. Sean O’Leary, who helped craft AAP guidelines, admits “we probably kept schools closed longer than we should have.”
Early signs that children were not as likely as adults to get sick, and that students posed little threat to others in the school environment, were not taken seriously enough. Too much time and energy were wasted on deep cleaning even after it was clear the virus was airborne. Opening windows that had been painted shut would have been a much better investment.
As it became more apparent that school closures were not having the deterrent effect we had hoped for, there were still communities that resisted reopening. The Times’ reporters noted that teachers’ unions and Democratic-leaning cities tended to be more cautious about reopening. And clearly there was a political flavor to how communities responded. Masking is probably one of the best examples of how emotions and politics colored our responses.
Are there things we could have done differently? One can certainly understand why teachers might have been cautious about returning to in-school learning. With more than a quarter of teachers in this country older than 50 (16% over 55) and nearly 80% of elementary and middle school teachers self-reporting that they are obese or overweight, educators represent a group that we now know is more vulnerable to complications from COVID. In retrospect, had we understood more about the virus and the downsides of remote learning, the government could have offered paid leave to teachers who felt vulnerable. Expediting the transition of younger, less vulnerable college students in their final years of training into the workforce could then have kept schools open until we were up to speed with vaccines and treatments. But the water has spilled over the dam. We can hope that we as a nation have learned that frequently evaluating our strategies, and being flexible enough to change them, will help in future pandemics. Unfortunately, those RNA viruses are fast mutators and clever adapters. Strategies we thought were working the first time may not succeed with new variants.
We have now learned that, in general, remote learning was a bust. My grandkids knew it at the time. It’s not just the learning piece. It’s about the social contact with peers that can provide comfort and support when the adults around at home may be anxious and depressed. School is a place you can be physically active away from 24/7 television at home. Adapting to going to school can be difficult for some young children in the beginning because of separation anxiety, but for the vast majority of children doing the school thing is a habit that is quickly rewarded and reinforced daily.
Children learn in school because they are rubbing elbows with other kids who are learning. While some peers may be distracting, the data suggest the distractions of home are far more of a problem. Most children I know were eager to get back in school because that’s where their friends were. But, getting back in the habit of going to school can be difficult for some, especially those who have been less successful in the past. Not surprisingly, the longer the hiatus the more difficult the reentry becomes.
The big lesson we mustn’t forget is that being in school is far more valuable than we ever imagined. And, when we are considering our options in future pandemics and natural disasters, we should be giving much more weight to in-school learning than we have in the past.
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].
Autoimmunity’s Female Bias and the Mysteries of Xist
Female bias in autoimmune disease can be profound, with nine females developing lupus for every male affected, and nearly twice that ratio seen in Sjögren disease.
For years, researchers have worked to determine the reasons for sex-linked differences in immune response and autoimmunity, with environmental factors, sex hormones, and X-chromosome inactivation — the process by which a second X chromosome is silenced — all seen as having roles.
More recently, different groups of researchers have homed in on a long noncoding RNA called X-inactive specific transcript, or Xist, as a potential driver of sex bias in autoimmune disease. Xist, which is expressed in female mammals, has been known since the 1990s as the master regulator of X-chromosome inactivation, the process that silences the second X chromosome and averts a fatal double dose of X-linked genes.
The inactivation process, which scientists liken to wrapping the extra X with a fluffy cloud of proteins, occurs early in embryonic development. After its initial work silencing the X, Xist is produced throughout the female’s life, allowing X inactivation to be maintained.
But is it possible that Xist, and the many dozens of proteins it recruits to keep that extra X chromosome silent, can also provoke autoimmunity? This is the question that several teams of researchers have been grappling with, resulting in provocative findings and opening exciting new avenues of discovery.
Xist Protein Complexes Make Male Mice Vulnerable to Lupus
In February, researchers Howard Chang, MD, PhD, and Diana Dou, PhD, of Stanford University in Stanford, California, made worldwide news when they published results from an experiment using male mice genetically engineered to carry a non-silencing form of Xist on one of their chromosomes.
Xist acts like a scaffold, recruiting multiple protein complexes to help it do its job. Dr. Dou explained in an interview that her team has been eyeing suspiciously for years the dozens of proteins Xist recruits in the process of X-chromosome inactivation, many of which are known autoantigens.
When the mice were injected with pristane, a chemical that induces lupus-like autoimmunity in mice, the Xist-producing males developed symptoms at a rate similar to that of females, while wild-type male mice did not.
By using a male model, the scientists could determine whether Xist could cause an increased vulnerability for autoimmunity absent the influence of female hormones and development. “Everything else about the animal is male,” Dr. Dou commented. “You just add the formation of the Xist ribonucleoprotein particles — Xist RNA plus the associating proteins — to male cells that would not ordinarily have these particles. Is just having the particles present in these animals sufficient to increase their autoimmunity? This is what our paper showed: That just having expression of Xist, the presence of these Xist [ribonucleoproteins], is enough in permissive genetic backgrounds to invoke higher incidence and severity of autoimmune disease development in our pristane-induced lupus model.”
The Stanford group sees the Xist protein complex, which they have studied extensively, as a key to understanding how Xist might provoke autoimmunity. Nonetheless, Dr. Dou said, “It’s important to note that there are other contributing factors, which is why not all females develop autoimmunity, and we had very different results in our autoimmune-resistant mouse strain compared to the more autoimmune-prone strain. Xist is a factor, but many factors are required to subvert the checkpoints in immune balance and allow the progression to full-blown autoimmunity.”
Faulty X Inactivation and Gene Escape
The understanding that Xist might be implicated in autoimmune disease — and explain some of its female bias — is not new.
About a decade ago, Montserrat Anguera, PhD, a biologist at the University of Pennsylvania, Philadelphia, began looking at the relationship of X-chromosome inactivation, which by definition involves Xist, and lupus.
Dr. Anguera hypothesized that imperfect X inactivation allowed for greater escape of genes associated with immunity and autoimmunity. Studying patients with lupus, Dr. Anguera found that the silencing process was abnormal, allowing more of these genes to escape the silenced X — including toll-like receptor 7 (TLR-7) and other genes implicated in the pathogenesis of lupus.
“If you get increased expression of certain genes from the [silenced] X, like TLR-7, it can result in autoimmune disease,” Dr. Anguera said. “So what we think is that in the lupus patients, because the silencing is impacted, you’re going to have more expression happening from the inactive X. And then in conjunction with the active X, that’s going to throw off the dosage [of autoimmunity-linked genes]. You’re changing the dosage of genes, and that’s what’s critical.”
Even among patients with lupus whose symptoms are well controlled with medication, “if you look at their T cells and B cells, they still have messed up X inactivation,” Dr. Anguera said. “The Xist RNA that’s supposed to be tethered to the inactive X in a fluffy cloud is not localized, and instead is dispersed all over the nucleus.”
Dr. Anguera pointed out that autoimmune diseases are complex and can result from a combination of factors. “You also have a host of hormonal and environmental contributors, such as previous viral infections,” she said. And of course men can also develop lupus, meaning that the X chromosome cannot explain everything.
Dr. Anguera said that, while the findings by the Stanford scientists do not explain the full pathogenesis of lupus and related diseases, they still support a strong role for Xist in sex-biased autoimmune diseases. “It’s sort of another take on it,” she said.
Is It the Proteins, the RNA, or Both?
The Stanford team points to the proteins recruited by Xist in the process of X-chromosome inactivation as the likely trigger of autoimmunity. However, a group of researchers at Johns Hopkins University in Baltimore, Maryland, made the case in a 2022 paper that Xist RNA itself was dangerous. They found that numerous short RNA sequences within the Xist molecule serve as ligands for TLR-7. And TLR-7 ligation causes plasmacytoid dendritic cells to overproduce type I interferon, a classic hallmark of lupus.
“Within rheumatology, the diseases that tend to be most female biased are the ones that are antibody positive and have this presence of upregulated interferon,” explained Brendan Antiochos, MD. “Lupus is an example of that. Sjögren’s syndrome is another. So there’s always been this quest to want to understand the mechanisms that explain why women would have more autoimmunity. And are there specific pathways which could contribute? One of the key pathways that’s been shown in humans and in mice to be important in lupus is toll-like receptor signaling.” Most convincingly, one recent study showed that people who have a gain-of-function mutation in their TLR-7 gene get a spontaneous form of lupus.
These findings led Erika Darrah, PhD, and her colleague Dr. Antiochos to begin looking more deeply into which RNAs could be triggering this signaling pathway. “We started to think: Well, there is this sex bias. Could it be that women have unique RNAs that could potentially act as triggers for TLR-7 signaling?” Dr. Darrah said.
Dr. Darrah and Dr. Antiochos looked at publicly available genetic data to identify sex-biased sources of self-RNA containing TLR-7 ligands. Xist, they found, was chock full of them. “Every time we analyzed that data, no matter what filter we applied, Xist kept popping out over and over again as the most highly female skewed RNA, the RNA most likely to contain these TLR-7 binding motifs,” Dr. Darrah said. “We started to formulate the hypothesis that Xist was actually promoting responses that were dangerous and pathogenic in lupus.”
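A motif scan of this kind can be approximated simply. The sketch below counts GU-rich windows in an RNA sequence (TLR-7 is known to prefer GU-rich single-stranded RNA); the window size, threshold, and toy sequence are arbitrary illustrative choices, not the team’s actual method.

```python
# Sketch of a simple motif scan: flag GU-rich windows, a rough proxy for
# TLR-7 binding motifs. Window size and cutoff are arbitrary assumptions,
# and the input is a toy fragment, not real Xist sequence.
def gu_rich_windows(rna: str, window: int = 10, min_gu_frac: float = 0.7):
    """Yield (position, window) for windows whose G+U content meets the cutoff."""
    rna = rna.upper().replace("T", "U")   # accept DNA-style input
    for i in range(len(rna) - window + 1):
        chunk = rna[i:i + window]
        if (chunk.count("G") + chunk.count("U")) / window >= min_gu_frac:
            yield i, chunk

toy_fragment = "AAGUGUUGGUUCAGAUCCGUGUGUUGAA"   # stand-in sequence
for pos, chunk in gu_rich_windows(toy_fragment):
    print(pos, chunk)
```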
That finding led the team to conduct in vitro experiments showing that different fragments of Xist can activate TLR-7, resulting in higher interferon production. Finally, they looked at blood and kidney cells from women with lupus and found that higher Xist expression correlated with more interferon production and higher disease activity. “The more Xist, the sicker people were,” Dr. Darrah said.
Xist’s Other Functions
Xist was first studied in the 1990s, and most research has centered on its primary role in X-chromosome inactivation. A research group led by Kathrin Plath, PhD, at the University of California, Los Angeles, has been occupied for years with untangling exactly how Xist does what it does. “It’s a very clever RNA, right? It can silence the whole chromosome,” Dr. Plath said in an interview.
In 2021, Dr. Plath and her colleagues established in detail how Xist executes silencing, setting down pairs of molecules in specific spots along the chromosome and building huge protein clouds around them. “We worked on learning where Xist binds and what proteins it binds, drilling down to understand how these proteins and the RNA are coming together.”
Dr. Plath has long suspected that Xist has other functions besides X inactivation, and she and her colleagues are starting to identify them. Early this year they published the surprising finding that Xist can regulate gene expression in autosomes, or non–sex-linked chromosomes, “which it might well also do in cancer cells and lymphocytes,” Dr. Plath said. “And now there is this new evidence of an autoimmune function,” she said. “It’s a super exciting time.”
The different hypotheses surrounding Xist’s role in sex-biased autoimmunity aren’t mutually exclusive, Dr. Plath said. “There’s a tremendous enrichment of proteins occurring” during X inactivation, she said, supporting the Stanford team’s hypothesis that proteins are triggering autoimmunity. As for the Johns Hopkins researchers’ understanding that Xist RNA itself is the trigger, “I’m totally open to that,” she said. “Why can’t it be an autoantigen?”
The other model in the field, Dr. Plath noted, is the one proposed by Dr. Anguera — “that there’s [gene] escape from X-inactivation — that females have more escape expression, and that Xist is more dispersed in the lymphocytes [of patients with lupus]. In fact, Xist becoming a little dispersed might make it a better antigen. So I do think everything is possible.”
The plethora of new findings related to autoimmunity has caused Dr. Plath to consider redirecting her lab’s focus toward more translational work, “because we are obviously good at studying Xist.” Among the mysteries Dr. Plath would like to solve is how some genes manage to escape the Xist cloud.
What is needed, she said, is collaboration. “Everyone will come up with different ideas. So I think it’s good to have more people look at things together. Then the field will achieve a breakthrough treatment.”
Female bias in autoimmune disease can be profound, with nine females developing lupus for every male affected, and nearly twice that ratio seen in Sjögren disease.
For years, researchers have worked to determine the reasons for sex-linked differences in immune response and autoimmunity, with environmental factors, sex hormones, and X-chromosome inactivation — the process by which a second X chromosome is silenced — all seen as having roles.
More recently, different groups of researchers have homed in on a long noncoding RNA fragment called X-inactive specific transcript, or Xist, as a potential driver of sex bias in autoimmune disease. Xist, which occurs in female mammals, has been known since the 1990s as the master regulator of X-chromosome inactivation, the process by which the second X chromosome is silenced, averting a fatal double dose of X-linked genes.
The inactivation process, which scientists liken to wrapping the extra X with a fluffy cloud of proteins, occurs early in embryonic development. After its initial work silencing the X, Xist is produced throughout the female’s life, allowing X inactivation to be maintained.
But is it possible that Xist, and the many dozens of proteins it recruits to keep that extra X chromosome silent, can also provoke autoimmunity? This is the question that several teams of researchers have been grappling with, resulting in provocative findings and opening exciting new avenues of discovery.
Xist Protein Complexes Make Male Mice Vulnerable to Lupus
In February, researchers Howard Chang, MD, PhD, and Diana Dou, PhD, of Stanford University in Stanford, California, made worldwide news when they published results from an experiment using male mice genetically engineered to carry a non-silencing form of Xist on one of their chromosomes.
Xist acts like a scaffold, recruiting multiple protein complexes to help it do its job. Dr. Dou explained in an interview that her team has been eyeing suspiciously for years the dozens of proteins Xist recruits in the process of X-chromosome inactivation, many of which are known autoantigens.
When the mice were injected with pristane, a chemical that induces lupus-like autoimmunity in mice, the Xist-producing males developed symptoms at a rate similar to that of females, while wild-type male mice did not.
By using a male model, the scientists could determine whether Xist could cause an increased vulnerability for autoimmunity absent the influence of female hormones and development. “Everything else about the animal is male,” Dr. Dou commented. “You just add the formation of the Xist ribonucleoprotein particles — Xist RNA plus the associating proteins — to male cells that would not ordinarily have these particles. Is just having the particles present in these animals sufficient to increase their autoimmunity? This is what our paper showed: That just having expression of Xist, the presence of these Xist [ribonucleoproteins], is enough in permissive genetic backgrounds to invoke higher incidence and severity of autoimmune disease development in our pristane-induced lupus model.”
The Stanford group sees the Xist protein complex, which they have studied extensively, as a key to understanding how Xist might provoke autoimmunity. Nonetheless, Dr. Dou said, “It’s important to note that there are other contributing factors, which is why not all females develop autoimmunity, and we had very different results in our autoimmune-resistant mouse strain compared to the more autoimmune-prone strain. Xist is a factor, but many factors are required to subvert the checkpoints in immune balance and allow the progression to full-blown autoimmunity.”
Faulty X Inactivation and Gene Escape
The understanding that Xist might be implicated in autoimmune disease — and explain some of its female bias — is not new.
About a decade ago, Montserrat Anguera, PhD, a biologist at the University of Pennsylvania, Philadelphia, began looking at the relationship of X-chromosome inactivation, which by definition involves Xist, and lupus.
Dr. Anguera hypothesized that imperfect X inactivation allowed for greater escape of genes associated with immunity and autoimmunity. Studying patients with lupus, Dr. Anguera found that the silencing process was abnormal, allowing more of these genes to escape the silenced X — including toll-like receptor 7 (TLR-7) and other genes implicated in the pathogenesis of lupus.
“If you get increased expression of certain genes from the [silenced] X, like TLR-7, it can result in autoimmune disease,” Dr. Anguera said. “So what we think is that in the lupus patients, because the silencing is impacted, you’re going to have more expression happening from the inactive X. And then in conjunction with the active X, that’s going to throw off the dosage [of autoimmunity-linked genes]. You’re changing the dosage of genes, and that’s what’s critical.”
Even among patients with lupus whose symptoms are well controlled with medication, “if you look at their T cells and B cells, they still have messed up X inactivation,” Dr. Anguera said. “The Xist RNA that’s supposed to be tethered to the inactive X in a fluffy cloud is not localized, and instead is dispersed all over the nucleus.”
Dr. Anguera pointed out that autoimmune diseases are complex and can result from a combination of factors. “You also have a host of hormonal and environmental contributors, such as previous viral infections,” she said. And of course men can also develop lupus, meaning that the X chromosome cannot explain everything.
Dr. Anguera said that, while the findings by the Stanford scientists do not explain the full pathogenesis of lupus and related diseases, they still support a strong role for Xist in sex-biased autoimmune diseases. “It’s sort of another take on it,” she said.
Is It the Proteins, the RNA, or Both?
The Stanford team points to the proteins recruited by Xist in the process of X-chromosome inactivation as the likely trigger of autoimmunity. However, a group of researchers at Johns Hopkins University in Baltimore, Maryland, made the case in a 2022 paper that Xist RNA itself was dangerous. They found that numerous short RNA sequences within the Xist molecule serve as ligands for TLR-7. And TLR-7 ligation causes plasmacytoid dendritic cells to overproduce type 1 interferon, a classic hallmark of lupus.
“Within rheumatology, the diseases that tend to be most female biased are the ones that are antibody positive and have this presence of upregulated interferon,” explained Brendan Antiochos, MD. “Lupus is an example of that. Sjögren’s syndrome is another. So there’s always been this quest to want to understand the mechanisms that explain why women would have more autoimmunity. And are there specific pathways which could contribute? One of the key pathways that’s been shown in humans and in mice to be important in lupus is toll-like receptor signaling.” Most convincingly, one recent study showed that people who have a gain-of-function mutation in their TLR-7 gene get a spontaneous form of lupus.
These findings led Erika Darrah, PhD, and her colleague Dr. Antiochos to begin looking more deeply into which RNAs could be triggering this signaling pathway. “We started to think: Well, there is this sex bias. Could it be that women have unique RNAs that could potentially act as triggers for TLR-7 signaling?” Dr. Darrah said.
Dr. Darrah and Dr. Antiochos looked at publicly available genetic data to identify sex-biased sources of self-RNA containing TLR-7 ligands. Xist, they found, was chock full of them. “Every time we analyzed that data, no matter what filter we applied, Xist kept popping out over and over again as the most highly female skewed RNA, the RNA most likely to contain these TLR-7 binding motifs,” Dr. Darrah said. “We started to formulate the hypothesis that Xist was actually promoting responses that were dangerous and pathogenic in lupus.”
That finding led the team to conduct in-vitro experiments that showed different fragments of Xist can activate TLR-7, resulting in higher interferon production. Finally, they looked at blood and kidney cells from women with lupus and found that higher Xist expression correlated with more interferon production, and higher disease activity. “The more Xist, the sicker people were,” Dr. Darrah said.
Xist’s Other Functions
Xist was first studied in the 1990s, and most research has centered on its primary role in X-chromosome inactivation. A research group led by Kathrin Plath, PhD, at the University of California, Los Angeles, has been occupied for years with untangling exactly how Xist does what it does. “It’s a very clever RNA, right? It can silence the whole chromosome,” Dr. Plath said in an interview.
In 2021, Dr. Plath and her colleagues established in detail how Xist executes silencing, setting down pairs of molecules in specific spots along the chromosome and building huge protein clouds around them. “We worked on learning where Xist binds and what proteins it binds, drilling down to understand how these proteins and the RNA are coming together.”
Dr. Plath has long suspected that Xist has other functions besides X inactivation, and she and her colleagues are starting to identify them. Early this year they published the surprising finding that Xist can regulate gene expression in autosomes, or non–sex-linked chromosomes, “which it might well also do in cancer cells and lymphocytes,” Dr. Plath said. “And now there is this new evidence of an autoimmune function,” she said. “It’s a super exciting time.”
The different hypotheses surrounding Xist’s role in sex-biased autoimmunity aren’t mutually exclusive, Dr. Plath said. “There’s a tremendous enrichment of proteins occurring” during X inactivation, she said, supporting the Stanford team’s hypothesis that proteins are triggering autoimmunity. As for the Johns Hopkins researchers’ understanding that Xist RNA itself is the trigger, “I’m totally open to that,” she said. “Why can’t it be an autoantigen?”
The other model in the field, Dr. Plath noted, is the one proposed by Dr. Anguera — “that there’s [gene] escape from X-inactivation — that females have more escape expression, and that Xist is more dispersed in the lymphocytes [of patients with lupus]. In fact, Xist becoming a little dispersed might make it a better antigen. So I do think everything is possible.”
The plethora of new findings related to autoimmunity has caused Dr. Plath to consider redirecting her lab’s focus toward more translational work, “because we are obviously good at studying Xist.” Among the mysteries Dr. Plath would like to solve is how some genes manage to escape the Xist cloud.
What is needed, she said, is collaboration. “Everyone will come up with different ideas. So I think it’s good to have more people look at things together. Then the field will achieve a breakthrough treatment.”
Visionary Surgery Saved Pitcher’s Arm. Now Even Children Get It
In 1974, Tommy John of the Los Angeles Dodgers was 31 and a 12-year veteran of Major League Baseball when he became the unwitting vanguard of a revolution in baseball and orthopedics. Fifty years later, Mr. John might not be a candidate for the latest advances to a procedure that bears his name.
The southpaw pitcher had faced the abrupt end of his career when, after one fateful delivery, he found himself unable to throw to home. So he took a gamble on the surgical equivalent of a Hail Mary: the reconstruction of a torn ligament in his pitching elbow.
The experiment was a wild success. Mr. John pitched — and better than he had before — for another 14 seasons, retiring in 1989 at the age of 46. How much better? After the surgery, he tallied three 20-win seasons compared with none before the operation, and he finished among the top five vote-getters for the annual Cy Young Award three times. He was named an All-Star once before the surgery and three times after.
The triumph notwithstanding, Tommy John now cautions against Tommy John surgery. What’s given him and clinicians pause is a trend in recent years of ever-younger athletes who undergo the procedure.
Along with the surgical improvements in repairing a torn ulnar collateral ligament (UCL) is a demographic shift toward school-aged athletes who get it. One study concluded that by 2014, 67.4% of UCL reconstruction surgeries were performed on athletes between 16 and 20 years of age. Some patients are still in Little League when they undergo the procedure.
Experts say these athletes have weakened their UCLs through overuse. They disagree on whether to call it an “epidemic,” but if it is, “the vaccine is awareness” against throwing too hard and too often, said Eric Makhni, MD, an orthopedic surgeon at Henry Ford Health in Detroit.
From Career-Ending to Routine
Mr. John’s entry into baseball and orthopedic lore was initially slow, but the trickle turned into a tide. After Frank Jobe, MD, swapped a healthy tendon from Mr. John’s right wrist for his worn and torn left UCL on September 25, 1974, Dr. Jobe didn’t perform his second such operation for another 1194 days. By the time “Tommy John surgery” became a recognized phrase, Mr. John was still active but only 14 professional baseball players had undergone the operation.
Prior to the start of spring training this year, an oft-cited database listed 366 pro players who’d undergone the operation.
“Before Tommy John, that was a career-ending injury,” said Grant E. Garrigues, MD, an orthopedic surgeon at Midwest Orthopaedics at RUSH in Chicago, who called Mr. John “a pure revolutionary.”
Tommy John surgery is “the only one that I can think of that is named after the patient rather than the doctor who first did it,” said Patrick McCulloch, MD, an orthopedic surgeon in Houston and a team physician for the Astros.
Dr. McCulloch, who performs about 25 UCL repairs a year, said that by recent estimates, one-third of pro pitchers had had some sort of surgical repair. He hesitated to call the increasing number of operations an epidemic but acknowledged that the ingredients exist for more elbow trauma among baseball players.
“More people are playing more often, and people are bigger and stronger and throwing harder,” he said.
Either way, Dr. McCulloch said, “the procedure is a victim of its own success” because it is “just done phenomenally well.”
The surgery is now commonplace — perhaps too commonplace, said David W. Altchek, MD, attending surgeon and co-chief emeritus at Hospital for Special Surgery in New York City.
Dr. Altchek played a key role in the popularity of the operation. Twenty-two years after Mr. John’s surgery, he helped develop a variation of the procedure called the docking technique.
Whereas Dr. Jobe sutured Mr. John’s replacement graft to itself, “we developed a different way of tying it over a bone bridge, which was more secure and more easy to tension,” Dr. Altchek explained.
The advance meant less drilling into bone and enabled surgeons to avoid moving a problem-free ulnar nerve or removing the flexor-pronator muscle that protects the elbow from stress. “The trauma of the surgery is significantly less,” he said. “We just made it a lot easier very quickly,” cutting the surgery time from 2 hours to 30-40 minutes.
Maybe the surgery became too easy, said Dr. Altchek, who estimates he has done 2000 of them over the past 30 years. “I don’t want to condemn my colleagues, but there are a lot of people doing the surgery,” he said. “And not a lot of people are doing a lot of them, and they don’t know the nuances of doing the surgery.”
The older procedures are known as the “full Tommy John”; each has a 12- to 18-month healing process, with a success rate of 80%-85%. Pitchers typically sit out a season while recovering.
Brandon Erickson, MD, an orthopedic surgeon at Rothman Orthopaedic Institute in New York City, said that in younger patients he has recently turned more often to the suture of the future: an internal brace that provides a repair rather than reconstruction.
The procedure, pioneered by Felix H. Savoie III, MD, the Ray J. Haddad Professor of Orthopaedics at Tulane University School of Medicine in New Orleans, and Jeffrey R. Dugas, MD, of Andrews Sports Medicine & Orthopaedic Center in Birmingham, Alabama, uses collagen-coated tape that looks like a shoelace and provides a scaffold that Dr. McCulloch said “is inductive to healing and growth of ligament tissue.”
The brace is intended for an “overhead” athlete (mostly baseball players but also javelin throwers and gymnasts) whose UCL is torn on only one side but is otherwise in good shape. In a pitcher the same age as Mr. John was when Dr. Jobe performed the first procedure, “that ligament may not be of very good quality,” Dr. McCulloch said. “It may have thickened. It may have calcifications.” But for a high-school junior with aspirations to pitch in college or beyond without “way too many miles on the elbow,” the approach is a good fit. The healing process is as little as 6 months.
“The ones who have a good ligament are very likely to do well,” said Dr. Erickson, an assistant team doctor for the Philadelphia Phillies.
“If the patient’s ligament is generally ‘good’ with only a tear, the InternalBrace procedure may be used to repair the native ligament. On the other end of the spectrum, if the patient’s ligament is torn and degenerative the surgeon may opt to do a UCL reconstruction using an auto or allograft — ie, Tommy John surgery,” Allen Holowecky, senior product manager of Arthrex of Naples, Florida, the maker of the InternalBrace, told this news organization. “Before UCL repair, Tommy John surgery was the only real treatment option. We tend to see repairs done on younger patients since their ligament hasn’t seen years of use-damage.”
Calls for Caution
Tommy John III wanted to play baseball like his dad until near-fatal complications from shoulder surgery altered his path. He was drawn to chiropractic and consults on injury prevention. “All surgeries and all medical interventions are cut first, ask questions later,” he said. “I was born with that.”
He saw his dad’s slow, heroic comeback from the surgery and described him as the perfect candidate for Dr. Jobe’s experiment. Tommy John spent his recovery time squeezing Silly Putty and throwing tennis balls. “He was willing to do anything necessary. He wanted to throw. That was his brush.” When the son was recovering from his own injury, “he said, ‘Learn the knuckleball.’ I said, ‘I don’t want to. I’ve reached my point.’ ”
He said he tells young patients with UCL injuries to rest. But instead “we have year-round sports with the promise that the more you play, the better,” he said. “They’re over-activitied.”
According to the American Academy of Orthopaedic Surgeons, 6.4 million children and adolescents in the United States played organized baseball in 2022, down from 11.5 million in 2014. Nearly half of pitchers played in a league with no maximum pitch counts, and 43.5% pitched on consecutive days, the group said.
How many UCL repair or reconstruction surgeries are performed on youth athletes each year is unclear. A 2019 study, however, found that although baseball injuries decreased between 2006 and 2016, the elbow was “the only location of injury that saw an increase.”
Dr. Garrigues said some parents of throwing athletes have asked about prophylactic Tommy John surgery for their children. He said it shouldn’t apply to pitchers.
“People have taken it a little too far,” he said. Dr. Garrigues and others argue against children throwing weighted balls when coming back from surgery. Instead, “we’re shutting them down,” he said.
Throwing any pitch is an act of violence on the body, Dr. Garrigues said, with the elbow taking the final brunt of the force. “These pitchers are functioning at the absolute limits of what the human body can take,” he said. “There’s only so many bullets in a gun,” which is why pitchers often feel the twinge of a torn UCL on a routine pitch.
Dr. Makhni suggested cross-training for pitchers in the off-season instead of playing baseball year-round. “If you play soccer, your footwork is going to be better,” he said.
“Kids shouldn’t be doing this all year round,” said Rebecca Carl, MD, associate professor of pediatrics at Northwestern University Feinberg School of Medicine in Chicago. “We are recommending that kids take 2 or 3 months off.” In the off-season, she urges them to strengthen their backs and cores.
Such advice can “feel like a bombshell,” said Dr. Carl, who chairs the Council on Sports Medicine and Fitness for the American Academy of Pediatrics. “Some started at a very young age. They go to camps. If I say to a teenager, ‘If you do this, I can keep you from getting injured,’ they think, ‘I won’t be injured.’” Most parents, however, understand the risk of “doing too much, too soon.”
Justin Orenduff, a former pitching prospect until his arm blew out, has made a career teaching head-to-toe pitching mechanics. He founded DVS Baseball, which uses software to teach pitchers how to properly use every muscle, starting with the orientation of the back foot. He, too, argues against pitching year-round. “Everyone on that travel team expects to get their fair share of playing time,” he said. “It just never stops.”
Organized baseball is paying attention. It has come up with the Pitch Smart program that gives maximum pitch counts for young players, but experts said children often get around that by belonging to several leagues.
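To make concrete why per-league limits are easy to sidestep, here is a small, hypothetical sketch of a shared pitch-count ledger that sums a player’s pitches across leagues by day; the daily limit is an invented number for illustration, not an actual Pitch Smart threshold.

```python
# Hypothetical sketch: a shared pitch-count ledger across leagues.
# Per-league caps miss a pitcher who splits outings between leagues;
# summing by (player, date) catches it. DAILY_LIMIT is an invented
# number for illustration, not an actual Pitch Smart value.
from collections import defaultdict

DAILY_LIMIT = 85  # hypothetical daily cap for one age group

pitch_log: dict = defaultdict(int)  # (player, date) -> pitches across all leagues

def record_pitches(player: str, date: str, league: str, pitches: int) -> None:
    """Add an outing to the shared ledger and flag daily-limit violations."""
    pitch_log[(player, date)] += pitches
    total = pitch_log[(player, date)]
    if total > DAILY_LIMIT:
        print(f"{player} exceeded {DAILY_LIMIT} pitches on {date}: "
              f"{total} total (latest {pitches} in {league})")

# Two outings, each legal on its own, together over the cap:
record_pitches("J. Doe", "2024-06-01", "Travel League", 60)
record_pitches("J. Doe", "2024-06-01", "Rec League", 45)
```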
Dr. Altchek said some surgeons have added platelet-rich plasma, stem cells, and bone marrow during surgery to quicken the slow healing time from UCL replacement. But he said, “it has to heal. Can you speed up biology?”
Dr. McCulloch said that, all the advances in Tommy John surgery aside, “the next frontier is really trying to crack the code on prevention.”
A version of this article first appeared on Medscape.com.
Maternal Lifestyle Interventions Boost Babies’ Heart Health
Infants born to women with obesity showed improved measures of cardiovascular health when their mothers adopted healthier lifestyles before and during pregnancy, based on data from a systematic review presented at the annual meeting of the Society for Reproductive Investigation.
Previous research has shown that children born to mothers with a high body mass index (BMI) are more likely to die from cardiovascular disease in later life, said presenting author Samuel J. Burden, PhD, in an interview.
“Surprisingly, early signs of these heart issues can start before birth and continue into childhood,” said Dr. Burden, a research associate in the Department of Women & Children’s Health, School of Life Course & Population Sciences, King’s College London, London, United Kingdom.
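For reference, the BMI cited above is weight in kilograms divided by the square of height in meters; here is a minimal sketch using the standard WHO adult cutoffs, with invented example values.

```python
# BMI as referenced above: weight (kg) divided by height (m) squared.
# Category cutoffs follow the standard WHO adult classification; the
# example values are invented.
def bmi(weight_kg: float, height_m: float) -> float:
    return weight_kg / height_m ** 2

def category(value: float) -> str:
    if value < 18.5:
        return "underweight"
    if value < 25:
        return "normal weight"
    if value < 30:
        return "overweight"
    return "obesity"

b = bmi(95.0, 1.65)
print(f"BMI {b:.1f}: {category(b)}")  # BMI 34.9: obesity
```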
To examine how interventions such as a healthy diet and exercise affect the heart health of infants born to women with obesity, Dr. Burden and colleagues reviewed data from eight randomized controlled trials of diet and exercise in pregnancy. Of these, two used antenatal exercise, two used diet and physical activity, and one used preconception diet and physical activity. The studies ranged in size from 18 to 404 participants. Two studies included infants younger than 2 months of age, and four included children aged 3-7 years.
Overall, lifestyle interventions before conception and before birth were associated with significant changes in cardiac remodeling, specifically reduced interventricular septal wall thickness.
In addition, one of three studies of cardiac diastolic function and four of five studies of systolic function showed significant improvements. Those same studies also showed improvement in systolic and diastolic blood pressure in infants of mothers who took part in the interventions. The studies were limited mainly by large attrition rates, the researchers wrote in their presentation. However, more studies in larger populations that also include older children could confirm the findings and inform public health strategies to promote healthy lifestyles for pregnant women, they noted.
Encourage Healthy Lifestyle Before and During Pregnancy
The evidence supports the findings from animal studies showing that an offspring’s health is influenced by maternal lifestyle before and during pregnancy, Dr. Burden said in an interview. The data suggest that healthcare providers should encourage women with a high BMI who want to become pregnant to eat healthfully and become more active as a way to enhance the future cardiovascular health of their children, he said.
The full results of the current study are soon to be published, but more work is needed, said Dr. Burden. “While we observed a protective effect from these lifestyle programs, there is a need for more extensive studies involving a larger number of women (and their children) who were part of the initial research,” he said. “Additionally, it will be crucial to track these children into adulthood to determine whether these antenatal lifestyle interventions persist in lowering the risk of future cardiovascular disease.”
Beginning healthy lifestyle programs prior to pregnancy might yield the best results for promoting infant cardiovascular health, and more prepregnancy interventions for women with obesity are needed, Dr. Burden added.
The current study adds to the growing body of evidence that the in utero environment can have lifelong effects on offspring, Joseph R. Biggio Jr, MD, system chair of maternal fetal medicine at Ochsner Health, New Orleans, Louisiana, said in an interview.
“Several studies have previously shown that the children of mothers with diabetes, hypertension, or obesity are at increased risk for developing signs of metabolic syndrome and cardiovascular changes during childhood or adolescence,” said Dr. Biggio.
The data from this systematic review support the potential value of interventions aimed at improving maternal weight gain and cardiovascular performance before and during pregnancy that may result in reduced cardiovascular remodeling and myocardial thickening in infants, he said.
The study was supported by a British Heart Foundation Special Project Grant. The researchers had no financial conflicts to disclose. Dr. Biggio had no financial conflicts to disclose.
Teen Pregnancy Linked With Risk for Premature Death
Teen pregnancy is associated with a higher risk for premature mortality, both among those who carry the pregnancies to term and those who miscarry, according to a new study.
Among 2.2 million female teenagers in Ontario, Canada, the risk for premature death by age 31 years was 1.5 times higher among those who had one teen pregnancy and 2.1 times higher among those with two or more teen pregnancies.
“No person should die during childhood or early adulthood. Such deaths, unexpected and tragic, are often from preventable causes, including intentional injury,” lead author Joel Ray, MD, an obstetric medicine specialist and epidemiologist at St. Michael’s Hospital in Toronto, told this news organization.
“Women who experience teen pregnancy appear more vulnerable, often having experienced a history of adverse experiences in childhood, including abuse and economic challenges,” he said.
The study was published online in JAMA Network Open.
Analyzing Pregnancy Associations
The investigators conducted a population-based cohort study of all girls who were alive at age 12 years from April 1991 to March 2021 in Ontario. They evaluated the risk for all-cause mortality from age 12 years onward in association with the number of teen pregnancies between ages 12 and 19 years and the age at first pregnancy. The investigators adjusted the hazard ratios for year of birth, comorbidities at ages 9-11 years, area-level education, income level, and rural status.
Among more than 2.2 million teens, 163,124 (7.3%) had a pregnancy at a median age of 18 years, including 121,276 (74.3%) who had one pregnancy and 41,848 (25.6%) who had two or more. These teens were more likely to live in the lowest neighborhood income quintile and in an area with a lower rate of high school completion. They also had a higher prevalence of self-harm history between ages 12 and 19 years but not a higher prevalence of physical or mental comorbidities.
Among the 163,124 teens who had a pregnancy, 60,037 (36.8%) had a pregnancy that ended in a birth, including 59,485 (99.1%) live births; 106,135 (65.1%) had a pregnancy that ended in induced abortion; and 17,945 (11%) had one that ended in miscarriage or ectopic pregnancy. (Because about a quarter of these teens had more than one pregnancy, the categories overlap.)
Overall, there were 6030 premature deaths among those without a teen pregnancy, or 1.9 per 10,000 person-years. There were 701 deaths among those with one teen pregnancy (4.1 per 10,000 person-years) and 345 deaths among those with two or more teen pregnancies (6.1 per 10,000 person-years).
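As a purely illustrative aside, these crude rates follow from simple arithmetic: deaths divided by person-years, scaled to 10,000. A minimal Python sketch is below; the person-year denominators are back-calculated from the published rates (deaths divided by the rate, times 10,000), so they are approximations rather than study data.

```python
# Illustrative arithmetic only: crude mortality rates per 10,000
# person-years from the death counts reported above. Person-year
# totals are back-calculated from the published rates, so they are
# approximations, not study data.

def rate_per_10k(deaths: float, person_years: float) -> float:
    """Crude event rate per 10,000 person-years."""
    return deaths / person_years * 10_000

groups = {
    "no teen pregnancy":  (6030, 6030 / 1.9 * 10_000),  # ~31.7M person-years
    "one teen pregnancy": (701,  701 / 4.1 * 10_000),   # ~1.71M person-years
    "two or more":        (345,  345 / 6.1 * 10_000),   # ~0.57M person-years
}

for label, (deaths, py) in groups.items():
    print(f"{label}: {rate_per_10k(deaths, py):.1f} per 10,000 person-years")

# Crude rate ratios versus no teen pregnancy. Note they are larger
# than the adjusted hazard ratios reported next (1.51 and 2.14),
# since adjustment for covariates attenuates the association.
print(f"one vs none: {4.1 / 1.9:.2f}")   # 2.16
print(f"two+ vs none: {6.1 / 1.9:.2f}")  # 3.21
```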
The adjusted hazard ratios (AHRs) for mortality were 1.51 for those with one pregnancy and 2.14 for those with two or more pregnancies. Compared with no teen pregnancy, the AHRs for premature death were 1.41 if the first teen pregnancy ended in an induced abortion and 2.10 if it ended in a miscarriage or birth.
Comparing those with a teen pregnancy and those without, the AHRs for premature death were 1.25 from noninjury, 2.06 from unintentional injury, and 2.02 from intentional injury. Among patients with teen pregnancy, noninjury-related premature mortality was more common, at 2.0 per 10,000 person-years, than unintentional and intentional injuries, at 1.0 per 10,000 person-years and 0.4 per 10,000 person-years, respectively.
A teen pregnancy before age 16 years entailed the highest associated risk for premature death, with an AHR of 2.00.
Next Research Steps
“We were not surprised by our findings, but it was new to us to see that the risk for premature death was higher for women who had an induced abortion in their teen years,” said Dr. Ray. “It was even higher in those whose pregnancy ended in a birth or miscarriage.”
The investigators plan to evaluate whether the future risk for premature death after teen pregnancy differs by the type of induced abortion, such as procedural or pharmaceutical, or by whether the pregnancy ended in a live birth, stillbirth, or miscarriage. Among those with a live birth, the researchers will also analyze the risk for premature death in relation to whether the newborn was taken into custody by child protection services in Canada.
Other factors associated with teen pregnancy and overall mortality, particularly adverse childhood experiences, may point to the reasons for premature mortality and should be studied further, the authors wrote. Structural and systems-related factors should be considered as well.
Stigmatization and Isolation
“Some teens choose to become pregnant, but most teen pregnancies are unintended, which exposes shortcomings in the systems that exist to educate, guide, and support young people,” said Elizabeth Cook, a research scientist at Child Trends in Rockville, Maryland.
Dr. Cook, who wasn’t involved with this study, wrote an accompanying editorial in JAMA Network Open. She conducts studies of sexual and reproductive health for Child Trends.
“Teens who become pregnant often experience stigmatization and isolation that can make it more difficult to thrive in adulthood, especially if they lack the necessary support to navigate such a significant decision,” she said. “Fortunately, the systems that youths encounter, such as healthcare, education, and child welfare, are taking on a larger role in prevention efforts than they have in the past.”
These systems are shifting the burden of unintended teen pregnancy from the teens themselves and their behaviors to the health and education systems, Dr. Cook noted, though more work is needed around local policies and lack of access to healthcare facilities.
“Teen pregnancy may offer an opportunity to intervene in the lives of people at higher risk for premature death, but knowing how best to offer support requires an understanding of the context of their lives,” she said. “As a starting point, we must celebrate and listen to all pregnant young people so they can tell us what they need to live long, fulfilled lives.”
The study was funded by grants from the PSI Foundation and the Canadian Institutes of Health Research, as well as ICES, which is funded by the Ontario Ministry of Health and the Ministry of Long-Term Care. Dr. Ray and Dr. Cook reported no relevant disclosures.
A version of this article first appeared on Medscape.com.
Think Beyond the ‘Go-Tos’ for Wart Management, Expert Advises
SAN DIEGO — When Jennifer Adams, MD, recently entered the search term “warts” on ClinicalTrials.gov, nearly 240 results popped up.
“There is a lot of research activity around this topic,” Dr. Adams, vice chair of the department of dermatology at the University of Nebraska Medical Center, said at the annual meeting of the American Academy of Dermatology. “We just don’t have fantastic, well-run trials on many of the currently available treatments.”
In a 2012 Cochrane review on the topical treatment of non-genital cutaneous warts, authors drew from 85 trials involving 8,815 randomized patients. They found that most warts spontaneously resolved, and the authors determined salicylic acid to be safe and modestly beneficial. Specifically, trials of salicylic acid (SA) versus placebo showed that the former significantly increased the chance of clearance of warts at all sites (risk ratio, 1.56, 95% confidence interval [CI], 1.20-2.03). A meta-analysis of cryotherapy versus placebo for warts at all sites favored neither intervention nor control (RR, 1.45, 95% CI, 0.65-3.23).
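As an aside for readers who want the mechanics, a risk ratio and its confidence interval come from simple arithmetic on event counts. Below is a minimal Python sketch using hypothetical counts (not the Cochrane review's data) that computes an RR with a log-scale Wald 95% CI.

```python
# Minimal sketch: risk ratio with a log-scale Wald 95% CI.
# The counts below are hypothetical, for illustration only.
import math

def risk_ratio_ci(events_tx: int, n_tx: int,
                  events_ctrl: int, n_ctrl: int, z: float = 1.96):
    """Return (RR, lower, upper) for a two-arm trial."""
    rr = (events_tx / n_tx) / (events_ctrl / n_ctrl)
    se_log_rr = math.sqrt(1 / events_tx - 1 / n_tx
                          + 1 / events_ctrl - 1 / n_ctrl)
    lower = math.exp(math.log(rr) - z * se_log_rr)
    upper = math.exp(math.log(rr) + z * se_log_rr)
    return rr, lower, upper

# Hypothetical example: 60/100 warts cleared with salicylic acid
# vs 40/100 with placebo.
rr, lo, hi = risk_ratio_ci(60, 100, 40, 100)
print(f"RR {rr:.2f} (95% CI, {lo:.2f}-{hi:.2f})")  # RR 1.50 (95% CI, 1.12-2.00)
```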
“The authors determined that there is less evidence for cryotherapy but stated that it may work when salicylic acid does not, or in combination with salicylic acid,” Dr. Adams said. “However, salicylic acid and cryotherapy don’t do enough for our patients [with warts]. There are a lot of situations where we need to reach further into the toolbox.”
A 2021 review article listed many options for managing difficult-to-treat warts, including intralesional Candida antigen, intralesional measles-mumps-rubella (MMR) vaccine, intralesional HPV vaccine, intralesional vitamin D, intralesional cidofovir, intralesional bleomycin, and intralesional 5-FU injections, as well as topical vitamin D, topical cidofovir, and topical bleomycin. According to Dr. Adams, clinical data exist for cidofovir and vitamin D, but the studies evaluated different formulations, doses, and sites of administration, and randomized controlled trials are limited.
“Intralesional cidofovir is more effective than the topical form, but intralesional cidofovir can be painful and both forms are expensive,” she said. “Topical vitamin D is less likely to cause dyspigmentation compared to other available treatments, so it’s a great option in skin of color, but it has been less effective compared to some of our other topical treatments.”
Newer Options Promising
On the horizon, berdazimer gel was approved in January 2024 for the treatment of molluscum, and results from trials of its use for extragenital warts are encouraging. Another promising option is topical ionic contraviral therapy (ICVT) with digoxin and furosemide combined, which inhibits cellular potassium influx. A phase 2a randomized controlled trial of 80 adults found a statistically significant reduction in the diameter of cutaneous warts among those who received ICVT compared with those who received placebo (P = .002). “It’s cheap and well tolerated,” Dr. Adams added.
Intralesional approaches to treating warts offer another alternative. A 2020 review of 43 articles concluded that intralesional treatments for warts have equal or superior efficacy to first-line salicylic acid or cryotherapy.
Dr. Adams said that she considers intralesional treatments such as vitamin D, MMR vaccine antigen, and Candida antigen for refractory, numerous, or distant site warts. “Injecting the MMR vaccine into the largest wart every 2 weeks has been found to lead to complete clearance in 60%-68% of cases in one study,” she said. “The benefit is that it’s $21 per dose, which is nice, but as with any vaccination, patients can develop flu-like symptoms as side effects.”
Use of the HPV vaccine for treating cutaneous warts remains controversial, she continued, but it seems to work better in younger patients. In one open-label study that evaluated the HPV vaccine for the treatment of multiple recalcitrant warts, with doses administered at 0, 2, and 6 months, the response rate 3 months after the third dose was 55% among those older than age 26 years, compared with 84% among those aged 9-26 years.
Another option, intralesional cidofovir, has been shown to be especially effective for refractory warts. “It has also been shown to work for warts in immunocompetent and immunocompromised patients,” Dr. Adams said.
In the realm of adjuvant treatments, microneedling has been found to have similar efficacy to needling, Dr. Adams said, but with minimal pain. “When we combine it with topical treatments like 5-FU, it’s even more efficacious,” she said.
One study found that combining microneedling with topical 5-FU had clearance similar to that of intralesional 5-FU or microneedling alone, but involved fewer treatment sessions and less pain in the combination group.
Autoinoculation has been used to stimulate an immune response in patients with warts, leading to clearance rates of 4% (mild clearance) to 66% (complete clearance) in one study. “We would expect this to work better in immunocompetent patients, but it’s something to keep in mind if you’re limited in the medications you can get for a patient,” Dr. Adams said. Also, results from a systematic review and meta-analysis suggest that systemic retinoids combined with intralesional immunotherapy leads to higher clearance rates and lower rates of recurrence of warts. The top performer among those tested was acitretin plus Candida antigen.
Dr. Adams advised dermatologists who try alternatives to salicylic acid and cryotherapy for warts to be “wary of a lack of high-level evidence” for their use. “They can be helpful for patients who have failed traditional therapies or have a contraindication to the usual go-tos.”
She reported having no relevant financial disclosures.
FROM AAD 2024
Time Is Money: Should Physicians Be Compensated for EHR Engagement?
Electronic health records (EHRs) make providing coordinated, efficient care easier and reduce medical errors and test duplications; research has also correlated EHR adoption with higher patient satisfaction and better outcomes. However, for physicians, the benefits come at a cost.
Physicians spend significantly more time in healthcare portals, making notes, entering orders, reviewing clinical reports, and responding to patient messages.
“I spend at least the same amount of time in the portal that I do in scheduled clinical time with patients,” said Eve Rittenberg, MD, primary care physician at Brigham and Women’s Hospital and assistant professor at Harvard Medical School, Boston. “So, if I have a 4-hour session of seeing patients, I spend at least another 4 or more hours in the patient portal.”
The latest data showed that primary care physicians logged a median of 36.2 minutes in the healthcare portal per patient visit, spending 58.9% more time on orders, 24.4% more time reading and responding to messages, and 13% more time on chart review compared with prepandemic portal use.
“EHRs can be very powerful tools,” said Ralph DeBiasi, MD, a clinical cardiac electrophysiologist at Yale New Haven Health in Connecticut. “We’re still working on how to best harness that power to make us better doctors and better care teams and to take better care of our patients because their use can take up a lot of time.”
Portal Time Isn’t Paid Time
Sharp increases in the amount of time spent in the EHR responding to messages or dispensing medical advice via the portal often aren’t linked to increases in compensation; most portal time is unpaid.
“There isn’t specific time allocated to working in the portal; it’s either done in the office while a patient is sitting in an exam room or in the mornings and evenings outside of traditional working hours,” Dr. DeBiasi told this news organization. “I think it’s reasonable to consider it being reimbursed because we’re taking our time and effort and making decisions to help the patient.”
Compensation for portal time affects all physicians, but the degree of impact depends on their specialties. Primary care physicians spent significantly more daily and after-hours time in the EHR, entering notes and orders, and doing clinical reviews compared to surgical and medical specialties.
In addition to the outsized impact on primary care, physician compensation for portal time is also an equity issue.
Dr. Rittenberg researched the issue and found a higher volume of communication from both patients and staff to female physicians than male physicians. As a result, female physicians spend 41.4 minutes more on the EHR than their male counterparts, which equates to more unpaid time. It’s likely no coincidence then that burnout rates are also higher among female physicians, who also leave the clinical workforce in higher numbers, especially in primary care.
“Finding ways to fairly compensate physicians for their work also will address some of the equity issues in workload and the consequences,” Dr. Rittenberg said.
Addressing the Issue
Some health systems have started charging patients who seek medical advice via patient portals, equating the communication to asynchronous acute care or an additional care touch point and billing based on the length and complexity of the messages. Patient fees for seeking medical advice via portals vary widely depending on their health system and insurance.
At University of California San Francisco Health, billing patients for EHR communication led to a sharp decrease in patient messages, which eased physician workload. At Cleveland Clinic, physicians receive “productivity credits” for the time spent in the EHR that can be used to reduce their clinic hours (but have no impact on their compensation).
Changes to the Medicare Physician Fee Schedule also allow physicians to bill for “digital evaluation and management” based on the time spent in an EHR responding to patient-initiated questions and requests.
However, more efforts are needed to ease burnout and reverse the number of physicians who are seeing fewer patients or leaving medical practice altogether as a direct result of spending increasing amounts of unpaid time in the EHR. Dr. Rittenberg, who spends an estimated 50% of her working hours in the portal, had to reduce her clinical workload by 25% due to such heavy portal requirements.
“The workload has become unsustainable,” she said. “The work has undergone a dramatic change over the past decade, and the compensation system has not kept up with that change.”
Prioritizing Patient and Physician Experiences
The ever-expanding use of EHRs is a result of their value as a healthcare tool. Data showed that the electronic exchange of information between patients and physicians improves diagnostics, reduces medical errors, enhances communication, and leads to more patient-centered care — and physicians want their patients to use the portal to maximize their healthcare.
“[The EHR] is good for patients,” said Dr. DeBiasi. “Sometimes, patients have access issues with healthcare, whether that’s not knowing what number to call or getting the right message to the right person at the right office. If [the portal] is good for them and helps them get access to care, we should embrace that and figure out a way to work it into our day-to-day schedules.”
But maximizing the patient experience shouldn’t come at the physicians’ expense. Dr. Rittenberg advocates a model that compensates physicians for the time spent in the EHR and prioritizes a team approach to rebalance the EHR workload to ensure that physicians aren’t devoting too much time to administrative tasks and can, instead, focus their time on clinical tasks.
“The way in which we provide healthcare has fundamentally shifted, and compensation models need to reflect that new reality,” Dr. Rittenberg added.
A version of this article first appeared on Medscape.com.
ADHD Meds Linked to Lower Suicide, Hospitalization Risk
TOPLINE:
Certain stimulants prescribed for attention-deficit/hyperactivity disorder (ADHD) are associated with a decreased risk for psychiatric and nonpsychiatric hospitalization and suicide, new data from a national cohort study showed.
METHODOLOGY:
- Investigators used various medical and administrative databases in Sweden to identify individuals aged 16-65 years who were diagnosed with ADHD between January 2006 and December 2021.
- Participants were followed for up to 15 years (mean duration, 7 years) from date of diagnosis until death, emigration, or end of data linkage in December 2021.
- Researchers wanted to explore the link between ADHD meds and psychiatric hospitalization, nonpsychiatric hospitalization, and suicidal behavior.
TAKEAWAY:
- The cohort included 221,700 individuals with ADHD (mean age, 25 years; 54% male), and 56% had a psychiatric comorbidity, such as an anxiety or stress-related disorder (24%) or depression or bipolar disorder (20%).
- Investigators found a significantly lower risk for psychiatric hospitalization with several medications, including amphetamine (adjusted hazard ratio [aHR], 0.74), lisdexamphetamine (aHR, 0.80), dexamphetamine (aHR, 0.88), methylphenidate (aHR, 0.93), and polytherapy (aHR, 0.85). All but atomoxetine were significant at the P < .001 level.
- ADHD medications associated with a significantly lower risk for nonpsychiatric hospitalization included amphetamine (aHR, 0.62), lisdexamphetamine (aHR, 0.64), polytherapy (aHR, 0.67), dexamphetamine (aHR, 0.72), methylphenidate (aHR, 0.80), and atomoxetine (aHR, 0.84). All but atomoxetine were significant at the P < .001 level.
- Dexamphetamine (aHR, 0.69; P < .001), lisdexamphetamine (aHR, 0.76; P = .43), polytherapy (aHR, 0.85; P = .02), and methylphenidate (aHR, 0.92; P = .007) were associated with a significantly lower risk for suicidal behavior. (A sketch of how such adjusted hazard ratios are estimated appears below.)
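For those curious about the mechanics, below is a minimal and purely illustrative Python sketch of fitting a Cox proportional hazards model with the lifelines library to obtain adjusted hazard ratios. The toy dataset and column names are invented; the study's actual register-based modeling is considerably more involved.

```python
# Purely illustrative: estimating an adjusted hazard ratio with a
# Cox proportional hazards model. All data below are invented.
import pandas as pd
from lifelines import CoxPHFitter

# Toy per-person data: follow-up in years, whether a psychiatric
# hospitalization occurred, whether the person was on ADHD
# medication, and age at diagnosis.
df = pd.DataFrame({
    "years":        [5.0, 2.1, 7.3, 4.4, 6.0, 1.5, 8.2, 3.9],
    "hospitalized": [1,   1,   0,   1,   0,   1,   0,   1],
    "medicated":    [1,   0,   1,   0,   1,   0,   1,   0],
    "age_at_dx":    [24,  31,  19,  45,  28,  37,  22,  41],
})

cph = CoxPHFitter()
cph.fit(df, duration_col="years", event_col="hospitalized")

# exp(coef) on the "medicated" row is the hazard ratio for
# medication use, adjusted here only for age at diagnosis.
print(cph.summary[["exp(coef)", "p"]])
```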
IN PRACTICE:
“Although concerns have been raised about the potential of amphetamines and methylphenidate for increasing the risk of adverse psychiatric outcomes, such as psychosis and mania, our results show that overall, the net effect on psychiatric outcomes is positive,” study authors wrote.
SOURCE:
Heidi Taipale, PhD, of Karolinska Institutet, led the study, which was published online in JAMA Network Open.
LIMITATIONS:
Due to the use of nationwide registers, there was a lack of detailed clinical data, including type and severity of symptoms. There were also no data on nonpharmacologic treatments.
DISCLOSURES:
The study was funded by the AFA Insurance Agency. Dr. Taipale reported receiving personal fees from Gedeon Richter, Janssen, Lundbeck, and Otsuka and grants from Janssen and Eli Lilly outside of the submitted work. Other disclosures are noted in the original article.
A version of this article first appeared on Medscape.com.
The Nose Knows
A few weeks ago I stumbled upon a two-sentence blurb in a pediatric newsletter summarizing the results of a study comparing the chemical profile of infant body odor with that of postpubertal adolescents. The investigators found that, not surprisingly, the smell of the chemical constituents wafting from babies was more appealing than that emanating from sweaty teenagers. I quickly moved on to the next blurb hoping to find something I hadn’t already experienced or figured out on my own.
But, as I navigated through the rest of my day filled with pickleball, bicycling, and the smell of home-cooked food, something about that study of body odor nagged at me. Who had funded that voyage into the obvious? Were my tax dollars involved? Had I been duped by some alleged nonprofit that had promised my donation would save lives or at least ameliorate suffering? Finally, as the sun dipped below the horizon, my curiosity got the best of me and I searched out the original study. Within minutes I fell down a rabbit hole into the cavernous world of odor science.
Having had zero experience in this niche field, I was amazed at the lengths to which these German odor investigators had gone to analyze the chemicals on and around their subjects. Just trying to ensure that materials and microclimates in the experimental environment were scent-free was a heroic effort. There was “Mono-trap sampling of volatiles, followed by thermodesorption-comprehensive gas chromatography, and time of flight-mass spectrometry analysis.” There were graphs and charts galore. This is serious science, folks. However, they still use the abbreviation “BO” freely, just as I learned to do in junior high. And, in some situations, the investigators relied on a panel of trained human sniffers to assess detection thresholds and the degree of pleasantness.
Ultimately, the authors’ conclusion was “sexual maturation coincides with changes in body odor chemical composition. Whether those changes explain differences in parental olfactory perception needs to be determined in future studies.” Again, no surprises here.
Exhausted by my venture into the realm of odor science, I finally found the answer to my burning question. The study was supported by the German Research Foundation and the European Union. Phew! Not on my nickel.
Lest you think that I believe any investigation into the potential role of smell in our health and well-being is pure bunk, let me make it clear that I think odor detection is one of the least well-studied and potentially most valuable areas of medical research. Having had one family tell me that their black Lab had twice successfully “diagnosed” their pre-verbal child’s ear infection (a diagnosis I confirmed with an otoscope; the tympanic membrane was intact), I have been keenly interested in the role of animal-assisted diagnosis.
If you also have wondered whether you could write off your pedigreed Portuguese Water Dog as an office expense, I would direct you to an article titled “Canine olfactory detection and its relevance to medical detection.” The authors note that there is some evidence of dogs successfully alerting physicians to Parkinson’s disease, some cancers, malaria, and COVID-19, among others. However, they caution that the reliability is, in most cases, not of a quality that would be helpful on a larger scale.
I can understand the reasons for their caution. However, from my own personal experience, I am completely confident that I can diagnose strep throat by smell, sometimes simply on opening the examination room door. My false-positive rate over 40 years of practice is zero. Of course I still test, and, not surprisingly, my false-negative rate is nothing to brag about. However, if a dog can come even close to my zero false-positive rate with a given disease, that information is valuable and suggests that we should be pointing the odor investigators and their toolbox of skills in that direction. I’m pretty sure we don’t need them to dig much deeper into why babies smell better than teenagers.
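Since the argument turns on false-positive and false-negative rates, a toy calculation makes the point concrete. The counts below are invented for illustration, not numbers from any practice: a zero false-positive rate means perfect specificity and positive predictive value even when sensitivity is unimpressive, which is exactly why a positive “sniff” is informative while a negative one still needs a swab.

```python
# Invented counts for illustration only.
tp, fp = 30, 0    # positives by smell: 30 true strep, 0 healthy flagged
fn, tn = 20, 50   # negatives by smell: 20 missed strep, 50 correctly cleared

sensitivity = tp / (tp + fn)   # 0.60: the nose misses 40% of true cases
specificity = tn / (tn + fp)   # 1.00: no patient without strep is flagged
ppv = tp / (tp + fp)           # 1.00: a positive "sniff" is always right

print(f"sensitivity={sensitivity:.2f}, specificity={specificity:.2f}, PPV={ppv:.2f}")
# A test like this cannot rule disease out, but a positive result rules it in.
```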
Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Other than a Littmann stethoscope he accepted as a first-year medical student in 1966, Dr. Wilkoff reports having nothing to disclose. Email him at [email protected].