Bivalent Vaccines Protect Even Children Who’ve Had COVID

This transcript has been edited for clarity.

It was only 3 years ago that we called the pathogen we now refer to as the coronavirus “2019-nCoV.” It was, in many ways, more descriptive than what we have today. The little “n” there stood for “novel” — and it was really that little “n” that caused us all the trouble.

You see, coronaviruses themselves were not really new to us. Understudied, perhaps, but with four strains running around the globe at any time giving rise to the common cold, these were viruses our bodies understood.

But the coronavirus discovered in 2019 was novel — not just to the world, but to our own immune systems. It was different enough from its circulating relatives that our immune memory cells failed to recognize it. Instead of acting like a cold, it acted like nothing we had seen before, at least in our lifetime. The story of the pandemic is very much a bildungsroman of our immune systems — a story of how our immunity grew up.

The difference between the start of 2020 and now, when infections with the coronavirus remain common but not as deadly, can be measured in terms of immune education. Some of our immune systems were educated by infection, some by vaccination, and many by both.

When the first vaccines emerged in December 2020, the opportunity to educate our immune systems was still huge. Though, at the time, an estimated 20 million had been infected in the US and 350,000 had died, there was a large population that remained immunologically naive. I was one of them.

If 2020 into early 2021 was the era of immune education, the postvaccine period was the era of the variant. From one COVID strain to two, to five, to innumerable, our immune memory — trained on a specific version of the virus or its spike protein — became imperfect again. Not naive; these variants were not “novel” in the way COVID-19 was novel, but they were different. And different enough to cause infection.

Following the playbook of another virus that loves to come dressed up in different outfits, the flu virus, we find ourselves in the booster era — a world where yearly doses of a vaccine, ideally matched to the variants circulating when the vaccine is given, are the recommendation if not the norm.

But questions remain about the vaccination program, particularly around who should get it. And two populations with big question marks over their heads are (1) people who have already been infected and (2) kids, because their risk for bad outcomes is so much lower.

This week, we finally have some evidence that can shed light on these questions. The study under the spotlight is this one, appearing in JAMA, which tries to analyze the ability of the bivalent vaccine — that’s the second one to come out, around September 2022 — to protect kids from COVID-19.

Now, right off the bat, this was not a randomized trial. The studies that established the viability of the mRNA vaccine platform were; they happened before the vaccine was authorized. But trials of the bivalent vaccine were mostly limited to proving immune response, not protection from disease.

Nevertheless, with some good observational methods and some statistics, we can try to tease out whether bivalent vaccines in kids worked.

The study combines three prospective cohort studies. The details are in the paper, but what you need to know is that the special sauce of these studies was that the kids were tested for COVID-19 on a weekly basis, whether they had symptoms or not. This is critical because it captures asymptomatic infections, which would otherwise go uncounted but can still transmit COVID-19.

Let’s do the variables of interest. First and foremost, the bivalent vaccine. Some of these kids got the bivalent vaccine, some didn’t. Other key variables include prior vaccination with the monovalent vaccine. Some had been vaccinated with the monovalent vaccine before, some hadn’t. And, of course, prior infection. Some had been infected before (based on either nasal swabs or blood tests).

Let’s focus first on the primary exposure of interest: getting that bivalent vaccine. Again, this was not randomly assigned; kids who got the bivalent vaccine were different from those who did not. In general, they lived in smaller households, they were more likely to be White, less likely to have had a prior COVID infection, and quite a bit more likely to have at least one chronic condition.

To me, this constellation of factors describes a slightly higher-risk group; it makes sense that they were more likely to get the second vaccine.

Given those factors, what were the rates of COVID infection? After nearly a year of follow-up, around 15% of the kids who hadn’t received the bivalent vaccine got infected compared with 5% of the vaccinated kids. Symptomatic infections represented roughly half of all infections in both groups.

After adjustment for factors that differed between the groups, this difference translated into a vaccine efficacy of about 50% in this population. That’s our first data point. Yes, the bivalent vaccine worked. Not amazingly, of course. But it worked.

What about the kids who had had a prior COVID infection? Somewhat surprisingly, the vaccine was just as effective in this population, despite the fact that their immune systems already had some knowledge of COVID. Ten percent of unvaccinated kids got infected, even though they had been infected before. Just 2.5% of kids who received the bivalent vaccine got infected, suggesting some synergy between prior infection and vaccination.
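To make those percentages concrete, here is a minimal sketch of how a crude, unadjusted vaccine efficacy falls out of the cumulative incidences quoted above. It is illustrative arithmetic only; the roughly 50% figure in the paper comes from a model that adjusts for the between-group differences noted earlier, so the crude numbers will not match it exactly.

```python
# Minimal sketch: crude vaccine efficacy (VE) from cumulative incidence.
# Percentages are the approximate figures quoted above; the published ~50%
# estimate is covariate-adjusted, so it differs from this raw arithmetic.

def crude_ve(risk_unvaccinated: float, risk_vaccinated: float) -> float:
    """Crude VE = 1 - relative risk of infection."""
    return 1 - risk_vaccinated / risk_unvaccinated

overall = crude_ve(0.15, 0.05)            # all kids: ~67%
prior_infection = crude_ve(0.10, 0.025)   # previously infected subgroup: ~75%

print(f"Crude VE, overall: {overall:.0%}")
print(f"Crude VE, prior infection: {prior_infection:.0%}")
```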

These data suggest that the bivalent vaccine did reduce the risk for COVID infection in kids. All good. But the piece still missing is how severe these infections were. It doesn’t appear that any of the 426 infections documented in this study resulted in hospitalization or death, fortunately. And no data are presented on the incidence of multisystem inflammatory syndrome in children, though given its rarity, I’d be surprised if any of these kids had it either.

So where are we? Well, it seems that the narrative out there that says “the vaccines don’t work” or “the vaccines don’t work if you’ve already been infected” is probably not true. They do work. This study and others in adults show that. If they work to reduce infections, as this study shows, they will also work to reduce deaths. It’s just that death is fortunately so rare in children that the number needed to vaccinate to prevent one death is very large. In that situation, the decision to vaccinate comes down to the risks associated with vaccination. So far, those risks seem very minimal.
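Here is a back-of-the-envelope sketch of why that number needed to vaccinate (NNV) gets so large when the outcome is rare. The baseline death risk below is a hypothetical number chosen purely for illustration; it is not a figure from this study, which recorded no deaths at all.

```python
# Illustrative only: number needed to vaccinate (NNV) to prevent one death.
# baseline_death_risk is an assumed, hypothetical value (NOT from the study);
# vaccine efficacy against death is assumed equal to the ~50% reported above.

baseline_death_risk = 1e-5   # assume 1 in 100,000 over a season (hypothetical)
vaccine_efficacy = 0.50

absolute_risk_reduction = baseline_death_risk * vaccine_efficacy
nnv = 1 / absolute_risk_reduction
print(f"NNV to prevent one death: {nnv:,.0f}")  # 200,000 under these assumptions
```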

Perhaps falling into a flu-like yearly vaccination schedule is not simply the result of old habits dying hard. Maybe it’s actually not a bad idea.
 

Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

More Young Women Being Diagnosed With Breast Cancer Than Ever Before

This transcript has been edited for clarity.

From the year 2000 until around 2016, the incidence of breast cancer among young women — those under age 50 — rose steadily, if slowly.

And then this happened:

[Figure: JAMA Network Open]

I look at a lot of graphs in my line of work, and it’s not too often that one actually makes me say “What the hell?” out loud. But this one did. Why are young women all of a sudden more likely to get breast cancer?

The graph comes from the paper “Breast Cancer Incidence Among US Women Aged 20 to 49 Years by Race, Stage, and Hormone Receptor Status,” appearing in JAMA Network Open.

Researchers from Washington University in St. Louis used the SEER (Surveillance, Epidemiology, and End Results) registries to conduct their analyses. SEER is a public database from the National Cancer Institute that covers about 27% of the US population and has a long track record of statistical methods for translating registry data into estimates that are representative of the population at large.
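As a flavor of the kind of statistical adjustment involved, here is a toy example of direct age standardization, a standard tool for turning registry counts into rates that can be compared across time or populations. Every number in it is invented, and it is not the specific methodology of this paper.

```python
# Toy example of direct age standardization (all numbers invented; this is
# an illustration of the general technique, not the paper's exact method).

# Observed (cases, person-years) in a registry, by age band
observed = {
    "20-29": (50, 1_000_000),
    "30-39": (400, 900_000),
    "40-49": (1_500, 800_000),
}

# Weight of each age band in a chosen standard population
standard_weights = {"20-29": 0.40, "30-39": 0.32, "40-49": 0.28}

age_specific_rates = {band: cases / py for band, (cases, py) in observed.items()}
age_adjusted_rate = sum(standard_weights[band] * rate
                        for band, rate in age_specific_rates.items())

print(f"Age-adjusted incidence: {age_adjusted_rate * 100_000:.1f} per 100,000")
```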

From 2000 to 2019, more than 200,000 women were diagnosed with primary invasive breast cancer in the dataset, and I’ve already given you the top-line results. Of course, when you see a graph like this, the next question really needs to be why?

Fortunately, the SEER dataset contains a lot more information than simply whether someone was diagnosed with cancer. In the case of breast cancer, there is information about the patient’s demographics, the hormone status of the cancer, the stage, and so on. Using those additional data points can help the authors, and us, start to formulate some hypotheses as to what is happening here.

Let’s start with something a bit tricky about this kind of data. We see an uptick in new breast cancer diagnoses among young women in recent years, and we need to tease that uptick apart a bit. It could be that the calendar year is the key factor here; in other words, breast cancer incidence has simply been higher across the board since 2016, and young women are part of that general rise. These are known as period effects.

Or is there something unique to these young women — something about their environmental exposures that puts them at higher risk than they would have faced had they been born at some other time? These are known as cohort effects.
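For intuition, here is a small made-up simulation of how the two patterns look in an age-by-year table of incidence rates. A period effect raises rates for every age group in the affected calendar years; a cohort effect travels with a birth cohort, so it shows up along the diagonal where year minus age equals the birth year.

```python
# Toy simulation of period vs cohort effects (all numbers invented).
# Period effect: every age group is elevated in calendar years >= 2016.
# Cohort effect: people born 1976-1985 are elevated at every age they reach.

BASELINE = 50.0   # cases per 100,000, made up

def rate(age: int, year: int) -> float:
    r = BASELINE
    if year >= 2016:                      # period effect
        r *= 1.2
    if 1976 <= (year - age) <= 1985:      # cohort effect (birth year)
        r *= 1.3
    return r

years = range(2000, 2020)
for age in (25, 35, 45):                  # midpoints of the 20s, 30s, 40s bands
    row = " ".join(f"{rate(age, y):5.0f}" for y in years)
    print(f"age {age}: {row}")
```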

The researchers teased these two effects apart, as you can see here, and concluded that, well, it’s both.

The rising incidence of breast cancer in young women is due both to the general increased incidence over time and the unique risk of being born in the late 1970s to early 1980s.

Stage of cancer at diagnosis can give us some more insight into what is happening, and the results are pretty interesting: the higher cancer rates are driven primarily by stage I and stage IV cancers, not stage II and stage III cancers.

The rising incidence of stage I cancers could reflect better detection, though many of the women in this cohort would not have been old enough to qualify for screening mammograms. That said, increased awareness about genetic risk and family history might be leading younger women to get screened, picking up more early cancers. Additionally, much of the increased incidence was with estrogen receptor–positive tumors, which might reflect the fact that women in this cohort are tending to have fewer children, and to have them later in life.

So why the rise in stage IV breast cancer? Well, precisely because younger women are not recommended to get screening mammograms; those who detect a lump on their own are likely to be at a more advanced stage. But I’m not sure why that would be changing recently. The authors argue that an increase in overweight and obesity in the country might be to blame here. Prior studies have shown that higher BMI is associated with higher stage at breast cancer diagnosis.

Of course, we can speculate as to multiple other causes as well: environmental toxins, pollution, hormone exposures, and so on. Figuring this out will be the work of multiple other studies. In the meantime, we should remember that the landscape of cancer is continuously changing. And that means we need to adapt to it. If these trends continue, national agencies may need to reconsider their guidelines for when screening mammography should begin — at least in some groups of young women.

Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


Even Intentional Weight Loss Linked With Cancer

This transcript has been edited for clarity.

As anyone who has been through medical training will tell you, some little scenes just stick with you. I had been seeing a patient in our resident clinic in West Philly for a couple of years. She was in her mid-60s with diabetes and hypertension and a distant smoking history. She was overweight and had been trying to improve her diet and lose weight since I started seeing her. One day she came in and was delighted to report that she had finally started shedding some pounds — about 15 in the past 2 months.

I enthusiastically told my preceptor that my careful dietary counseling had finally done the job. She looked through the chart for a moment and asked, “Is she up to date on her cancer screening?” A workup revealed adenocarcinoma of the lung. The patient did well, actually, but the story stuck with me.

The textbooks call it “unintentional weight loss,” often in big, scary letters, and every doctor will go just a bit pale if a patient tells them that, despite efforts not to, they are losing weight. But true unintentional weight loss is not that common. After all, most of us are at least half-heartedly trying to lose weight all the time. Should doctors be worried when we are successful?

A new study suggests that perhaps they should. We’re talking about this study, appearing in JAMA, which combined participants from two long-running observational cohorts: 120,000 women from the Nurses’ Health Study, and 50,000 men from the Health Professionals Follow-Up Study. (These cohorts started in the 1970s and 1980s, so we’ll give them a pass on the gender-specific study designs.)

The rationale of enrolling healthcare providers in these cohort studies is that they would be reliable witnesses of their own health status. If a nurse or doctor says they have pancreatic cancer, it’s likely that they truly have pancreatic cancer. Detailed health surveys were distributed to the participants every other year, and the average follow-up was more than a decade.

Participants recorded their weight — as an aside, a nested study found that self-reported weight was extremely well correlated with professionally measured weight — and whether they had received a cancer diagnosis since the last survey.

This allowed researchers to look at the phenomenon described above. Would weight loss precede a new diagnosis of cancer? And, more interestingly, would intentional weight loss precede a new diagnosis of cancer?

I don’t think it will surprise you to hear that individuals in the highest category of weight loss, those who lost more than 10% of their body weight over a 2-year period, had a larger risk of being diagnosed with cancer in the next year. That’s the yellow line in this graph. In fact, they had about a 40% higher risk than those who did not lose weight.

Increased risk was found across multiple cancer types, though cancers of the gastrointestinal tract, not surprisingly, were most strongly associated with antecedent weight loss.

What about intentionality of weight loss? Unfortunately, the surveys did not ask participants whether they were trying to lose weight. Rather, the surveys asked about exercise and dietary habits. The researchers leveraged these responses to create three categories of participants: those who seemed to be trying to lose weight (defined as people who had increased their exercise and dietary quality); those who didn’t seem to be trying to lose weight (they changed neither exercise nor dietary behaviors); and a middle group, which changed one or the other of these behaviors but not both.
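In pseudocode terms, the grouping amounts to something like the sketch below. This is my paraphrase of the three categories described above, not the authors’ actual definitions or code.

```python
# Sketch of the three intentionality-of-weight-loss categories described above
# (a paraphrase for illustration, not the study's exact operational definitions).

def intentionality(increased_exercise: bool, improved_diet: bool) -> str:
    if increased_exercise and improved_diet:
        return "high intentionality"      # seemed to be trying to lose weight
    if increased_exercise or improved_diet:
        return "medium intentionality"    # changed one behavior but not both
    return "low intentionality"           # changed neither behavior

print(intentionality(True, True))    # high intentionality
print(intentionality(False, False))  # low intentionality
```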

Let’s look at those who really seemed to be trying to lose weight. Over 2 years, they got more exercise and improved their diet.

If they succeeded in losing 10% or more of their body weight, they still had a higher risk for cancer than those who had not lost weight — about 30% higher, which is not that different from the 40% increased risk when you include those folks who weren’t changing their lifestyle.

This is why this study is important. The classic teaching is that unintentional weight loss is a bad thing and needs a workup. That’s fine. But we live in a world where perhaps the majority of people are, at any given time, trying to lose weight. The truth is that losing weight only with lifestyle modifications — exercise and diet — is actually really hard. So “success” could be a sign that something else is going on.

We need to be careful here. I am not by any means trying to say that people who have successfully lost weight have cancer. Both of the following statements can be true:

Significant weight loss, whether intentional or not, is associated with a higher risk for cancer.

Most people with significant weight loss will not have cancer.

Both of these can be true because cancer is, fortunately, rare. Of people who lose weight, the vast majority will lose weight because they are engaging in healthier behaviors. A small number may lose weight because something else is wrong. It’s just hard to tell the two apart.

Out of the nearly 200,000 people in this study, only around 16,000 developed cancer during follow-up. Again, although the chance of having cancer is slightly higher if someone has experienced weight loss, the chance is still very low.
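To put the relative risk in absolute terms, here is a rough sketch. The cohort totals are the approximate figures from the study; the 1-year baseline risk is an assumed, hypothetical number used only to show how a 40% relative increase plays out on a small base rate.

```python
# Rough sketch: relative vs absolute risk. Cohort totals approximate the study;
# the 1-year baseline risk is a hypothetical assumption, NOT a reported figure.

cohort_size = 200_000
cancers_over_followup = 16_000
overall_risk = cancers_over_followup / cohort_size      # ~8% over a decade-plus

one_year_baseline = 0.01      # ASSUMED 1-year cancer risk, for illustration only
relative_risk = 1.4           # ~40% higher risk after >=10% weight loss

risk_after_weight_loss = one_year_baseline * relative_risk
print(f"Cohort-wide risk over follow-up: {overall_risk:.1%}")
print(f"Assumed 1-year risk after major weight loss: {risk_after_weight_loss:.1%}")
print(f"Under these assumptions, about {1 - risk_after_weight_loss:.0%} of such "
      f"patients would have no cancer diagnosis in the next year.")
```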

We also need to avoid suggesting that weight loss causes cancer. Some people lose weight because of an existing but as yet undiagnosed cancer and its metabolic effects. This is borne out if you look at the risk of being diagnosed with cancer as you move further away from the interval of weight loss.

The further you get from the year of that 10% weight loss, the less likely you are to be diagnosed with cancer. Most of these cancers are diagnosed within a year of losing weight. In other words, if you’re reading this and getting worried that you lost weight 10 years ago, you’re probably out of the woods. That was, most likely, just you getting healthier.

Last thing: We have methods for weight loss now that are way more effective than diet or exercise. I’m looking at you, Ozempic. But aside from the weight loss wonder drugs, we have surgery and other interventions. This study did not capture any of that data. Ozempic wasn’t even on the market during this study, so we can’t say anything about the relationship between weight loss and cancer among people using nonlifestyle mechanisms to lose weight.

It’s a complicated system. But the clinically actionable point here is to notice if patients have lost weight. If they’ve lost it without trying, further workup is reasonable. If they’ve lost it but were trying to lose it, tell them “good job.” And consider a workup anyway.

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

This transcript has been edited for clarity.

As anyone who has been through medical training will tell you, some little scenes just stick with you. I had been seeing a patient in our resident clinic in West Philly for a couple of years. She was in her mid-60s with diabetes and hypertension and a distant smoking history. She was overweight and had been trying to improve her diet and lose weight since I started seeing her. One day she came in and was delighted to report that she had finally started shedding some pounds — about 15 in the past 2 months.

I enthusiastically told my preceptor that my careful dietary counseling had finally done the job. She looked through the chart for a moment and asked, “Is she up to date on her cancer screening?” A workup revealed adenocarcinoma of the lung. The patient did well, actually, but the story stuck with me.

The textbooks call it “unintentional weight loss,” often in big, scary letters, and every doctor will go just a bit pale if a patient tells them that, despite efforts not to, they are losing weight. But true unintentional weight loss is not that common. After all, most of us are at least half-heartedly trying to lose weight all the time. Should doctors be worried when we are successful?

A new study suggests that perhaps they should. We’re talking about this study, appearing in JAMA, which combined participants from two long-running observational cohorts: 120,000 women from the Nurses’ Health Study, and 50,000 men from the Health Professionals Follow-Up Study. (These cohorts started in the 1970s and 1980s, so we’ll give them a pass on the gender-specific study designs.)

The rationale of enrolling healthcare providers in these cohort studies is that they would be reliable witnesses of their own health status. If a nurse or doctor says they have pancreatic cancer, it’s likely that they truly have pancreatic cancer. Detailed health surveys were distributed to the participants every other year, and the average follow-up was more than a decade.

JAMA


Participants recorded their weight — as an aside, a nested study found that self-reported rate was extremely well correlated with professionally measured weight — and whether they had received a cancer diagnosis since the last survey.

This allowed researchers to look at the phenomenon described above. Would weight loss precede a new diagnosis of cancer? And, more interestingly, would intentional weight loss precede a new diagnosis of cancer.

I don’t think it will surprise you to hear that individuals in the highest category of weight loss, those who lost more than 10% of their body weight over a 2-year period, had a larger risk of being diagnosed with cancer in the next year. That’s the yellow line in this graph. In fact, they had about a 40% higher risk than those who did not lose weight.

JAMA


Increased risk was found across multiple cancer types, though cancers of the gastrointestinal tract, not surprisingly, were most strongly associated with antecedent weight loss.

JAMA


What about intentionality of weight loss? Unfortunately, the surveys did not ask participants whether they were trying to lose weight. Rather, the surveys asked about exercise and dietary habits. The researchers leveraged these responses to create three categories of participants: those who seemed to be trying to lose weight (defined as people who had increased their exercise and dietary quality); those who didn’t seem to be trying to lose weight (they changed neither exercise nor dietary behaviors); and a middle group, which changed one or the other of these behaviors but not both.

Let’s look at those who really seemed to be trying to lose weight. Over 2 years, they got more exercise and improved their diet.

If they succeeded in losing 10% or more of their body weight, they still had a higher risk for cancer than those who had not lost weight — about 30% higher, which is not that different from the 40% increased risk when you include those folks who weren’t changing their lifestyle.

JAMA


This is why this study is important. The classic teaching is that unintentional weight loss is a bad thing and needs a workup. That’s fine. But we live in a world where perhaps the majority of people are, at any given time, trying to lose weight. The truth is that losing weight only with lifestyle modifications — exercise and diet — is actually really hard. So “success” could be a sign that something else is going on.

We need to be careful here. I am not by any means trying to say that people who have successfully lost weight have cancer. Both of the following statements can be true:

Significant weight loss, whether intentional or not, is associated with a higher risk for cancer.

Most people with significant weight loss will not have cancer.

Both of these can be true because cancer is, fortunately, rare. Of people who lose weight, the vast majority will lose weight because they are engaging in healthier behaviors. A small number may lose weight because something else is wrong. It’s just hard to tell the two apart.

Out of the nearly 200,000 people in this study, only around 16,000 developed cancer during follow-up. Again, although the chance of having cancer is slightly higher if someone has experienced weight loss, the chance is still very low.

We also need to avoid suggesting that weight loss causes cancer. Some people lose weight because of an existing, as of yet undiagnosed cancer and its metabolic effects. This is borne out if you look at the risk of being diagnosed with cancer as you move further away from the interval of weight loss.

JAMA


The further you get from the year of that 10% weight loss, the less likely you are to be diagnosed with cancer. Most of these cancers are diagnosed within a year of losing weight. In other words, if you’re reading this and getting worried that you lost weight 10 years ago, you’re probably out of the woods. That was, most likely, just you getting healthier.

Last thing: We have methods for weight loss now that are way more effective than diet or exercise. I’m looking at you, Ozempic. But aside from the weight loss wonder drugs, we have surgery and other interventions. This study did not capture any of that data. Ozempic wasn’t even on the market during this study, so we can’t say anything about the relationship between weight loss and cancer among people using nonlifestyle mechanisms to lose weight.

It’s a complicated system. But the clinically actionable point here is to notice if patients have lost weight. If they’ve lost it without trying, further workup is reasonable. If they’ve lost it but were trying to lose it, tell them “good job.” And consider a workup anyway.

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

This transcript has been edited for clarity.

As anyone who has been through medical training will tell you, some little scenes just stick with you. I had been seeing a patient in our resident clinic in West Philly for a couple of years. She was in her mid-60s with diabetes and hypertension and a distant smoking history. She was overweight and had been trying to improve her diet and lose weight since I started seeing her. One day she came in and was delighted to report that she had finally started shedding some pounds — about 15 in the past 2 months.

I enthusiastically told my preceptor that my careful dietary counseling had finally done the job. She looked through the chart for a moment and asked, “Is she up to date on her cancer screening?” A workup revealed adenocarcinoma of the lung. The patient did well, actually, but the story stuck with me.

The textbooks call it “unintentional weight loss,” often in big, scary letters, and every doctor will go just a bit pale if a patient tells them that, despite efforts not to, they are losing weight. But true unintentional weight loss is not that common. After all, most of us are at least half-heartedly trying to lose weight all the time. Should doctors be worried when we are successful?

A new study suggests that perhaps they should. We’re talking about this study, appearing in JAMA, which combined participants from two long-running observational cohorts: 120,000 women from the Nurses’ Health Study, and 50,000 men from the Health Professionals Follow-Up Study. (These cohorts started in the 1970s and 1980s, so we’ll give them a pass on the gender-specific study designs.)

The rationale of enrolling healthcare providers in these cohort studies is that they would be reliable witnesses of their own health status. If a nurse or doctor says they have pancreatic cancer, it’s likely that they truly have pancreatic cancer. Detailed health surveys were distributed to the participants every other year, and the average follow-up was more than a decade.

JAMA


Participants recorded their weight — as an aside, a nested study found that self-reported rate was extremely well correlated with professionally measured weight — and whether they had received a cancer diagnosis since the last survey.

This allowed researchers to look at the phenomenon described above. Would weight loss precede a new diagnosis of cancer? And, more interestingly, would intentional weight loss precede a new diagnosis of cancer.

I don’t think it will surprise you to hear that individuals in the highest category of weight loss, those who lost more than 10% of their body weight over a 2-year period, had a larger risk of being diagnosed with cancer in the next year. That’s the yellow line in this graph. In fact, they had about a 40% higher risk than those who did not lose weight.

JAMA


Increased risk was found across multiple cancer types, though cancers of the gastrointestinal tract, not surprisingly, were most strongly associated with antecedent weight loss.

JAMA


What about intentionality of weight loss? Unfortunately, the surveys did not ask participants whether they were trying to lose weight. Rather, the surveys asked about exercise and dietary habits. The researchers leveraged these responses to create three categories of participants: those who seemed to be trying to lose weight (defined as people who had increased their exercise and dietary quality); those who didn’t seem to be trying to lose weight (they changed neither exercise nor dietary behaviors); and a middle group, which changed one or the other of these behaviors but not both.

Let’s look at those who really seemed to be trying to lose weight. Over 2 years, they got more exercise and improved their diet.

If they succeeded in losing 10% or more of their body weight, they still had a higher risk for cancer than those who had not lost weight — about 30% higher, which is not that different from the 40% increased risk when you include those folks who weren’t changing their lifestyle.

JAMA


This is why this study is important. The classic teaching is that unintentional weight loss is a bad thing and needs a workup. That’s fine. But we live in a world where perhaps the majority of people are, at any given time, trying to lose weight. The truth is that losing weight only with lifestyle modifications — exercise and diet — is actually really hard. So “success” could be a sign that something else is going on.

We need to be careful here. I am not by any means trying to say that people who have successfully lost weight have cancer. Both of the following statements can be true:

Significant weight loss, whether intentional or not, is associated with a higher risk for cancer.

Most people with significant weight loss will not have cancer.

Both of these can be true because cancer is, fortunately, rare. Of people who lose weight, the vast majority will lose weight because they are engaging in healthier behaviors. A small number may lose weight because something else is wrong. It’s just hard to tell the two apart.

Out of the nearly 200,000 people in this study, only around 16,000 developed cancer during follow-up. Again, although the chance of having cancer is slightly higher if someone has experienced weight loss, the chance is still very low.

We also need to avoid suggesting that weight loss causes cancer. Some people lose weight because of an existing, as of yet undiagnosed cancer and its metabolic effects. This is borne out if you look at the risk of being diagnosed with cancer as you move further away from the interval of weight loss.

JAMA


The further you get from the year of that 10% weight loss, the less likely you are to be diagnosed with cancer. Most of these cancers are diagnosed within a year of losing weight. In other words, if you’re reading this and getting worried that you lost weight 10 years ago, you’re probably out of the woods. That was, most likely, just you getting healthier.

Last thing: We have methods for weight loss now that are way more effective than diet or exercise. I’m looking at you, Ozempic. But aside from the weight loss wonder drugs, we have surgery and other interventions. This study did not capture any of that data. Ozempic wasn’t even on the market during this study, so we can’t say anything about the relationship between weight loss and cancer among people using nonlifestyle mechanisms to lose weight.

It’s a complicated system. But the clinically actionable point here is to notice if patients have lost weight. If they’ve lost it without trying, further workup is reasonable. If they’ve lost it but were trying to lose it, tell them “good job.” And consider a workup anyway.

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Testosterone Replacement May Cause ... Fracture?

Article Type
Changed
Wed, 01/24/2024 - 07:15

This transcript has been edited for clarity.

I am showing you a graph without any labels.

What could this line represent? The stock price of some company that made a big splash but failed to live up to expectations? An outbreak curve charting the introduction of a new infectious agent to a population? The performance of a viral tweet?

I’ll tell you what it is in a moment, but I wanted you to recognize that there is something inherently wistful in this shape, something that speaks of past glory and inevitable declines. It’s a graph that induces a feeling of resistance — no, do not go gently into that good night.

The graph actually represents (roughly) the normal level of serum testosterone in otherwise-healthy men as they age.

A caveat here: These numbers are not as well defined as I made them seem on this graph, particularly for those older than 65 years. But it is clear that testosterone levels decline with time, and the idea to supplement testosterone is hardly new. Like all treatments, testosterone supplementation has risks and benefits. Some risks are predictable, like exacerbating the symptoms of benign prostatic hyperplasia. Some risks seem to come completely out of left field. That’s what we have today, in a study suggesting that testosterone supplementation increases the risk for bone fractures.

Let me set the stage here by saying that nearly all prior research into the effects of testosterone supplementation has suggested that it is pretty good for bone health. It increases bone mineral density and bone strength, and it improves bone architecture.

So if you were to do a randomized trial of testosterone supplementation and look at fracture risk in the testosterone group compared with the placebo group, you would expect the fracture risk would be much lower in those getting supplemented. Of course, this is why we actually do studies instead of assuming we know the answer already — because in this case, you’d be wrong.

I’m talking about this study, appearing in The New England Journal of Medicine.

It’s a prespecified secondary analysis of a randomized trial known as the TRAVERSE trial, which randomly assigned 5246 men with low testosterone levels to transdermal testosterone gel vs placebo. The primary goal of that trial was to assess the cardiovascular risk associated with testosterone supplementation, and the major take-home was that there was no difference in cardiovascular event rates between the testosterone and placebo groups.

This secondary analysis looked at fracture incidence. Researchers contacted participants multiple times in the first year of the study and yearly thereafter. Each time, they asked whether the participant had sustained a fracture. If they answered in the affirmative, a request for medical records was made and the researchers, still blinded to randomization status, adjudicated whether there was indeed a fracture or not, along with some details as to location, situation, and so on.

The breaking news is that there were 154 confirmed fractures in the testosterone arm and 97 in the placebo arm. This was a big study, though, and that translates to just a 3.5% fracture rate in the testosterone group vs 2.5% in the placebo group, but the difference was statistically significant.
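
If you want to see how modest that absolute difference is, here’s the simple arithmetic on those quoted rates; the trial itself used a time-to-event analysis, so this won’t match the published hazard ratio exactly:

```python
# Relative vs absolute difference implied by the rounded rates quoted above.

rate_testosterone = 0.035
rate_placebo = 0.025

relative_risk = rate_testosterone / rate_placebo          # 1.4
absolute_difference = rate_testosterone - rate_placebo    # 0.01
number_needed_to_harm = 1 / absolute_difference           # 100

print(f"Relative risk: {relative_risk:.2f}")
print(f"Absolute risk difference: {absolute_difference:.1%}")
print(f"Roughly 1 extra fracture per {number_needed_to_harm:.0f} men treated")
```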

This difference persisted across various fracture types (non–high-impact fractures, for example) after excluding the small percentage of men taking osteoporosis medication.

How does a drug that increases bone mineral density and bone strength increase the risk for fracture?

Well, one clue — and this was pointed out in a nice editorial by Mathis Grossmann and Bradley Anawalt — is that the increased risk for fracture occurs quite soon after starting treatment, which is not consistent with direct bone effects. Rather, this might represent behavioral differences. Testosterone supplementation seems to increase energy levels; might it lead men to engage in activities that put them at higher risk for fracture?

Regardless of the cause, this adds to our knowledge about the rather complex mix of risks and benefits of testosterone supplementation and probably puts a bit more weight on the risks side. The truth is that testosterone levels do decline with age, as do many things, and it may not be appropriate to try to fight against that in all people. It’s worth noting that all of these studies use low levels of total serum testosterone as an entry criterion. But total testosterone is not what your body “sees.” It sees free testosterone, the portion not bound to sex hormone–binding globulin. And that binding protein is affected by lots of stuff — diabetes and obesity lower it, for example — making total testosterone levels seem low when free testosterone might be just fine.
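
Here’s a toy illustration of that total-versus-free point, with made-up numbers and a made-up binding factor; real free-testosterone estimation from total testosterone, SHBG, and albumin is more involved than this:

```python
# Toy model (arbitrary units): two men with identical free testosterone can
# have very different total testosterone if their SHBG levels differ.

def total_testosterone(free_t: float, shbg: float, bound_per_unit_shbg: float = 8.0) -> float:
    """Crude model: total = free + protein-bound (proportional to SHBG)."""
    return free_t + shbg * bound_per_unit_shbg

same_free_t = 10.0
normal_shbg_total = total_testosterone(same_free_t, shbg=40)  # 330.0
low_shbg_total = total_testosterone(same_free_t, shbg=20)     # 170.0 (e.g., obesity lowers SHBG)

# The second man "looks" low on a total-testosterone assay even though his
# free testosterone is identical to the first man's.
print(normal_shbg_total, low_shbg_total)
```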

In other words, testosterone supplementation is probably not terrible, but it is definitely not the cure for aging. In situations like this, we need better data to guide exactly who will benefit from the therapy and who will only be exposed to the risks.

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Yes, Patients Are Getting More Complicated

Article Type
Changed
Wed, 01/24/2024 - 15:03

This transcript has been edited for clarity.

The first time I saw a patient in the hospital was in 2004, twenty years ago, when I was a third-year med student. I mean, look at that guy. The things I could tell him.

Since that time, I have spent countless hours in the hospital as a resident, a renal fellow, and finally as an attending. And I’m sure many of you in the medical community feel the same thing I do, which is that patients are much more complicated now than they used to be. I’ll listen to an intern present a new case on rounds and she’ll have an assessment and plan that encompasses a dozen individual medical problems. Sometimes I have to literally be like, “Wait, why is this patient here again?”

But until now, I had no data to tell me whether this feeling was real — whether hospitalized patients really are getting more and more complicated or whether they only seem that way because I’m getting older. Maybe I was better able to keep track of things as an intern than I am now as an attending, spending just a couple of months a year in the hospital. I mean, after all, if patients were getting more complicated, surely hospitals would know this and allocate more resources to patient care, right?

Right?

It’s not an illusion. At least not according to this paper, Population-Based Trends in Complexity of Hospital Inpatients, appearing in JAMA Internal Medicine, which examines about 15 years of inpatient hospital admissions in British Columbia.

I like Canada for this study for two reasons: First, their electronic health record system is province-wide, so they don’t have issues of getting data from hospital A vs hospital B. All the data are there — in this case, more than 3 million nonelective hospital admissions from British Columbia. Second, there is universal healthcare. We don’t have to worry about insurance companies changing, or the start of a new program like the Affordable Care Act. It’s just a cleaner set-up.

Of course, complexity is hard to define, and the authors here decide to look at a variety of metrics I think we can agree are tied into complexity. These include things like patient age, comorbidities, medications, frequency of hospitalization, and so on. They also looked at outcomes associated with hospitalization: Did the patient require the ICU? Did they survive? Were they readmitted?

And the tale of the tape is as clear as that British Columbian air: Over the past 15 years, your average hospitalized patient has become about 3 years older, twice as likely to have kidney disease, 70% more likely to have diabetes, on more medications (particularly anticoagulants), and much more likely to be admitted through the emergency room. They’ve also spent more time in the hospital in the past year.

Given the increased complexity, you might expect that the outcomes for these patients are worse than years ago, but the data do not bear that out. In fact, inpatient mortality is lower now than it was 15 years ago, although 30-day postdischarge mortality is higher. Put those together and it turns out that death rates are pretty stable: 9% of people admitted for nonelective reasons to the hospital will die within 30 days. It’s just that nowadays, we tend to discharge them before that happens.
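
To see how those two trends can coexist, here’s some purely illustrative arithmetic. The 9% figure comes from the paper as quoted above; the split between in-hospital and postdischarge deaths is hypothetical:

```python
# Illustrative only: inpatient mortality can fall while total 30-day
# mortality stays flat, if deaths shift to after discharge.

total_30_day_mortality = 0.09

# Hypothetical "then" vs "now" splits of the same 9%
then_inpatient, then_postdischarge = 0.06, 0.03
now_inpatient, now_postdischarge = 0.04, 0.05

assert abs((then_inpatient + then_postdischarge) - total_30_day_mortality) < 1e-9
assert abs((now_inpatient + now_postdischarge) - total_30_day_mortality) < 1e-9

print(f"Then: {then_inpatient:.0%} in hospital + {then_postdischarge:.0%} after discharge")
print(f"Now:  {now_inpatient:.0%} in hospital + {now_postdischarge:.0%} after discharge")
# Same overall 30-day mortality; the deaths have simply moved outside the hospital walls.
```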

Why are our patients getting more complex? Some of it is demographics; the population is aging, after all. Some of it relates to the increasing burden of comorbidities like diabetes and kidney disease, which are associated with the obesity epidemic. But in some ways, we’re a victim of our own success. We have the ability to keep people alive today who would not have survived 15 years ago. We have better treatments for metastatic cancer, less-invasive therapies for heart disease, better protocolized ICU care.

Given all that, does it make any sense that many of our hospitals are at skeleton-crew staffing levels? That hospitalists report taking care of more patients than they ever have before?

There’s been so much talk about burnout in the health professions lately. Maybe something people need to start acknowledging — particularly those who haven’t practiced on the front lines for a decade or two — is that the job is, quite simply, harder now. As patients become more complex, we need more resources, human and otherwise, to care for them.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape. He tweets @fperrywilson, and his book, How Medicine Works and When It Doesn’t, is available now. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Why Are Prion Diseases on the Rise?

Article Type
Changed
Tue, 12/12/2023 - 12:10

This transcript has been edited for clarity.

In 1986, in Britain, cattle started dying.

The condition, quickly nicknamed “mad cow disease,” was clearly infectious, but the particular pathogen was difficult to identify. By 1993, 120,000 cattle in Britain were identified as being infected. As yet, no human cases had occurred and the UK government insisted that cattle were a dead-end host for the pathogen. By the mid-1990s, however, multiple human cases, attributable to ingestion of meat and organs from infected cattle, were discovered. In humans, variant Creutzfeldt-Jakob disease (CJD) was a media sensation — a nearly uniformly fatal, untreatable condition with a rapid onset of dementia, mobility issues characterized by jerky movements, and autopsy reports finding that the brain itself had turned into a spongy mess.

The United States banned UK beef imports in 1996 and only lifted the ban in 2020.

The disease was made all the more mysterious because the pathogen involved was not a bacterium, parasite, or virus, but a protein — or a proteinaceous infectious particle, shortened to “prion.”

Prions are misfolded proteins that aggregate in cells — in this case, in nerve cells. But what makes prions different from other misfolded proteins is that the misfolded protein catalyzes the conversion of its non-misfolded counterpart into the misfolded configuration. It creates a chain reaction, leading to rapid accumulation of misfolded proteins and cell death.
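
If you want intuition for why that chain reaction is so explosive, here’s a toy simulation of the process; the rate constant and pool sizes are arbitrary, so treat it as a cartoon rather than a kinetic model:

```python
# Toy autocatalytic conversion: each misfolded prion can convert normal
# protein it encounters, so misfolded protein accumulates slowly at first
# and then very quickly.

normal = 1_000_000      # normally folded prion protein (arbitrary units)
misfolded = 1           # a single misfolded "seed"
conversion_rate = 2e-6  # chance per step that a misfolded/normal encounter converts

for step in range(20):
    converted = conversion_rate * misfolded * normal  # mass-action-style conversion
    converted = min(converted, normal)                # can't convert more than exists
    normal -= converted
    misfolded += converted
    if step % 5 == 0:
        print(f"step {step:2d}: misfolded = {misfolded:,.0f}")
# Output climbs slowly, then explosively, then plateaus once the normal pool
# is exhausted - the hallmark of a chain reaction.
```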

And, like a time bomb, we all have prion protein inside us. In its normally folded state, the function of prion protein remains unclear — knockout mice do okay without it — but it is also highly conserved across mammalian species, so it probably does something worthwhile, perhaps protecting nerve fibers.

Far more common than humans contracting mad cow disease is the condition known as sporadic CJD, responsible for 85% of all cases of prion-induced brain disease. The cause of sporadic CJD is unknown.

But one thing is known: Cases are increasing.

I don’t want you to freak out; we are not in the midst of a CJD epidemic. But it’s been a while since I’ve seen people discussing the condition — which remains as horrible as it was in the 1990s — and a new research letter appearing in JAMA Neurology brought it back to the top of my mind.

Researchers, led by Matthew Crane at Hopkins, used the CDC’s WONDER cause-of-death database, which pulls diagnoses from death certificates. Normally, I’m not a fan of using death certificates for cause-of-death analyses, but in this case I’ll give it a pass. Assuming that the diagnosis of CJD is made, it would be really unlikely for it not to appear on a death certificate.

The main findings are seen here. Since 1990, there has been a steady uptick in the number of deaths due to CJD in this country, as well as an increase in overall incidence.

Note that we can’t tell whether these are sporadic CJD cases or variant CJD cases or even familial CJD cases; however, unless there has been a dramatic change in epidemiology, the vast majority of these will be sporadic.

The question is, why are there more cases?

Whenever this type of question comes up with any disease, there are basically three possibilities:

First, there may be an increase in the susceptible, or at-risk, population. In this case, we know that older people are at higher risk of developing sporadic CJD, and over time, the population has aged. To be fair, the authors adjusted for this and still saw an increase, though it was attenuated. (A rough sketch of this kind of age adjustment appears after this list.)

Second, we might be better at diagnosing the condition. A lot has happened since the mid-1990s, when the diagnosis was based more or less on symptoms. The advent of more sophisticated MRI protocols as well as a new diagnostic test called “real-time quaking-induced conversion testing” may mean we are just better at detecting people with this disease.

Third (and most concerning), a new exposure has occurred. What that exposure might be, where it might come from, is anyone’s guess. It’s hard to do broad-scale epidemiology on very rare diseases.
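
For those curious about what that age adjustment looks like in practice, here’s a sketch of direct age standardization with entirely hypothetical populations and rates; the point is only that applying each era’s age-specific rates to a fixed standard population removes the aging of the population from the comparison:

```python
# Direct age standardization, sketched with made-up numbers.

age_bands = ["<65", "65-79", "80+"]
standard_population = [0.80, 0.15, 0.05]   # fixed reference weights

# Hypothetical age-specific CJD death rates per million, then vs now
rates_1990 = [0.2, 4.0, 6.0]
rates_2020 = [0.2, 4.5, 7.0]

def age_standardized_rate(rates, weights):
    """Weighted average of age-specific rates using the standard weights."""
    return sum(r * w for r, w in zip(rates, weights))

print(f"1990 (standardized): {age_standardized_rate(rates_1990, standard_population):.2f} per million")
print(f"2020 (standardized): {age_standardized_rate(rates_2020, standard_population):.2f} per million")
# If the standardized rate still rises, aging alone doesn't explain the trend.
```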

But given these findings, it seems that a bit more surveillance for this rare but devastating condition is well merited.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and here on Medscape. He tweets @fperrywilson and his new book, How Medicine Works and When It Doesn’t, is available now.

F. Perry Wilson, MD, MSCE, has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Are you sure your patient is alive?

Article Type
Changed
Tue, 12/19/2023 - 11:28

 

This transcript has been edited for clarity.

Much of my research focuses on what is known as clinical decision support — prompts and messages to providers to help them make good decisions for their patients. I know that these things can be annoying, which is exactly why I study them — to figure out which ones actually help.

When I got started on this about 10 years ago, we were learning a lot about how best to message providers about their patients. My team had developed a simple alert for acute kidney injury (AKI). We knew that providers often missed the diagnosis, so maybe letting them know would improve patient outcomes.

As we tested the alert, we got feedback, and I have kept an email from an ICU doctor from those early days. It read:

Dear Dr. Wilson: Thank you for the automated alert informing me that my patient had AKI. Regrettably, the alert fired about an hour after the patient had died. I feel that the information is less than actionable at this time.

Our early system had neglected to add a conditional flag ensuring that the patient was still alive at the time it sent the alert message. A small oversight, but one that had very large implications. Future studies would show that “false positive” alerts like this seriously degrade physician confidence in the system. And why wouldn’t they?
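
The fix itself is almost embarrassingly simple. Here’s a minimal sketch of the gate we should have had from the start; the function names and patient record structure are hypothetical, not our actual system:

```python
# Minimal sketch: gate a clinical decision support alert on vital status
# before notifying anyone.

from dataclasses import dataclass
from typing import Optional

@dataclass
class Patient:
    mrn: str
    deceased: bool
    creatinine_baseline: float
    creatinine_current: float

def has_aki(p: Patient) -> bool:
    """Crude AKI screen: creatinine rise of >= 0.3 mg/dL over baseline."""
    return (p.creatinine_current - p.creatinine_baseline) >= 0.3

def maybe_send_aki_alert(p: Patient, send_alert) -> Optional[str]:
    """Only alert on living patients who meet AKI criteria."""
    if p.deceased:
        return None            # the conditional flag our early system lacked
    if has_aki(p):
        send_alert(p.mrn, "Possible AKI: creatinine rising from baseline")
        return p.mrn
    return None

# Example usage with a stand-in alert function
maybe_send_aki_alert(
    Patient(mrn="12345", deceased=False, creatinine_baseline=1.0, creatinine_current=1.6),
    send_alert=lambda mrn, msg: print(f"[ALERT] {mrn}: {msg}"),
)
```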

Knowing whether a patient is alive or dead seems like it should be trivial. But, as it turns out, in our modern balkanized health care system, it can be quite difficult. Not knowing the vital status of a patient can have major consequences.

Health systems send messages to their patients all the time: reminders of appointments, reminders for preventive care, reminders for vaccinations, and so on.

But what if the patient being reminded has died? It’s a waste of resources, of course, but more than that, it can be painful for their families and reflects poorly on the health care system. Of all the people who should know whether someone is alive or dead, shouldn’t their doctor be at the top of the list?

A new study in JAMA Internal Medicine quantifies this very phenomenon.

Researchers examined 11,658 primary care patients in their health system who met the criteria of being “seriously ill” and followed them for 2 years. During that period of time, 25% were recorded as deceased in the electronic health record. But 30.8% had actually died. That left 676 patients who had died, but were not known to have died, still in the system.
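
The arithmetic checks out, by the way; applying the gap between those two percentages to the cohort size lands right around 676:

```python
# Reconciling the numbers quoted above.

cohort = 11_658
actually_died = 0.308
recorded_dead = 0.25

missed_deaths = cohort * (actually_died - recorded_dead)
print(f"~{missed_deaths:.0f} patients died without the record reflecting it")  # ~676
```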

And those 676 were not left to rest in peace. They received 221 telephone and 338 health portal messages not related to death, and 920 letters reminding them about unmet primary care metrics like flu shots and cancer screening. Orders were entered into the health record for things like vaccines and routine screenings for 158 patients, and 310 future appointments — destined to be no-shows — were still on the books. One can only imagine the frustration of families checking their mail and finding yet another letter reminding their deceased loved one to get a mammogram.

How did the researchers figure out who had died? It turns out it’s not that hard. California keeps a record of all deaths in the state; they simply had to search it. Like all state death records, it tends to lag a bit, so it’s not terribly useful clinically, but it works. California and most other states also have a very accurate and up-to-date death file, which can only be used by law enforcement to investigate criminal activity and fraud; health care is left in the lurch.
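
The linkage itself is conceptually simple. Here’s a sketch of the kind of matching the researchers presumably did; the field names and the exact-match rule on name and date of birth are my assumptions, and real linkage usually adds fuzzier matching and manual review:

```python
# Sketch: match patients against a state death file on simple identifiers.

patients = [
    {"name": "JANE DOE", "dob": "1941-03-02"},
    {"name": "JOHN ROE", "dob": "1950-11-17"},
]

state_death_file = [
    {"name": "JANE DOE", "dob": "1941-03-02", "date_of_death": "2022-06-01"},
]

def link_deaths(patients, death_file):
    """Return patients found in the death file, keyed on name + DOB."""
    index = {(d["name"], d["dob"]): d["date_of_death"] for d in death_file}
    return {
        p["name"]: index[(p["name"], p["dob"])]
        for p in patients
        if (p["name"], p["dob"]) in index
    }

print(link_deaths(patients, state_death_file))  # {'JANE DOE': '2022-06-01'}
```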

Nationwide, there is a real-time fact-of-death service, supported by the National Association for Public Health Statistics and Information Systems. It allows employers to verify, in real time, whether the person applying for a job is alive. Healthcare systems are not allowed to use it.

Let’s also remember that very few people die in this country without some health care agency knowing about it and recording it. But sharing of medical information is so poor in the United States that your patient could die in a hospital one city away from you and you might not find out until you’re calling them to see why they missed a scheduled follow-up appointment.

These events — the embarrassing lack of knowledge about the very vital status of our patients — highlight a huge problem with health care in our country. The fragmented health care system is terrible at data sharing, in part because of poor protocols, in part because of unfounded concerns about patient privacy, and in part because of a tendency to hoard data that might be valuable in the future. It has to stop. We need to know how our patients are doing even when they are not sitting in front of us. When it comes to life and death, the knowledge is out there; we just can’t access it. Seems like a pretty easy fix.
 

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Connecticut. He has disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


 


Is air filtration the best public health intervention against respiratory viruses?

Article Type
Changed
Tue, 11/28/2023 - 11:53

 

This transcript has been edited for clarity.

When it comes to the public health fight against respiratory viruses – COVID, flu, RSV, and so on – it has always struck me as strange how staunchly basically any intervention is opposed. Masking was, of course, the prototypical entrenched warfare of opposing ideologies: advocates pointed to studies suggesting that masks prevent transmission and pushed for broad masking recommendations, while detractors cited studies suggesting masks were ineffective and characterized masking policies as fascist overreach. I’ll admit that I was always a bit perplexed by this, as that particular intervention seemed so benign – a bit annoying, I guess, but not crazy.

I have come to appreciate what I call status quo bias, which is the tendency to reject any policy, advice, or intervention that would force you, as an individual, to change your usual behavior. We just don’t like to do that. It has made me think that the most successful public health interventions might be the ones that take the individual out of the loop. And air quality control seems an ideal fit here. Here is a potential intervention where you, the individual, have to do precisely nothing. The status quo is preserved. We just, you know, have cleaner indoor air.

But even the suggestion of air treatment systems as a bulwark against respiratory virus transmission has been met with not just skepticism but cynicism, and perhaps even defeatism. It seems that there are those out there who think there really is nothing we can do. Sickness is interpreted in a Calvinistic framework: You become ill because it is predestined. But maybe air treatment could actually work. It seems like it might, if a new paper from PLOS One is to be believed.

What we’re talking about is a study titled “Bipolar Ionization Rapidly Inactivates Real-World, Airborne Concentrations of Infective Respiratory Viruses” – a highly controlled, laboratory-based analysis of a bipolar ionization system which seems to rapidly reduce viral counts in the air.

The proposed mechanism of action is pretty simple. The ionization system – which, don’t worry, has been shown not to produce ozone – spits out positively and negatively charged particles, which float around the test chamber, designed to look like a pretty standard room that you might find in an office or a school.

[Figure courtesy of PLOS One]

Virus is then injected into the chamber through an aerosolization machine, to achieve concentrations on the order of what you might get standing within 6 feet or so of someone actively infected with COVID while they are breathing and talking.

The idea is that those ions stick to the virus particles, similar to how a balloon sticks to the wall after you rub it on your hair, and that tends to cause them to clump together and settle on surfaces more rapidly, and thus get farther away from their ports of entry to the human system: nose, mouth, and eyes. But the ions may also interfere with viruses’ ability to bind to cellular receptors, even in the air.

To quantify viral infectivity, the researchers used a biological system. Basically, you take air samples and expose a petri dish of cells to them and see how many cells die. Fewer cells dying, less infective. Under control conditions, you can see that virus infectivity does decrease over time. Time zero here is the end of a SARS-CoV-2 aerosolization.

[Figure courtesy of PLOS One]

This may simply reflect the fact that virus particles settle out of the air. But when the ionization system was added, infectivity decreased much more quickly. As you can see, within about an hour there was almost no infective virus detectable. That’s fairly impressive.
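(If you want to see how one would quantify “much more quickly,” here is a minimal sketch of fitting an exponential decay rate to this kind of data. The numbers below are invented placeholders for illustration, not values from the paper.)

```python
# Illustrative only: compare inactivation rates with and without ionization by
# fitting a simple exponential decay. The data points are made-up placeholders,
# not measurements from the PLOS One study.
import numpy as np

def decay_rate_per_min(t_min, relative_infectivity):
    # Fit ln(infectivity) = -k * t + b; a larger k means faster inactivation.
    k, _ = np.polyfit(t_min, np.log(relative_infectivity), 1)
    return -k

t = np.array([0, 15, 30, 45, 60])                   # minutes after aerosolization ends
control = np.array([1.00, 0.85, 0.72, 0.61, 0.52])  # hypothetical: settling alone
ionized = np.array([1.00, 0.40, 0.15, 0.06, 0.02])  # hypothetical: ionizer running

k_c, k_i = decay_rate_per_min(t, control), decay_rate_per_min(t, ionized)
print(f"control: k ≈ {k_c:.3f}/min, ionized: k ≈ {k_i:.3f}/min ({k_i/k_c:.1f}x faster)")
```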

[Figure courtesy of PLOS One]

Now, I’m not saying that this is a panacea, but it is certainly worth considering the use of technologies like these if we are going to revamp the infrastructure of our offices and schools. And, of course, it would be nice to see this tested in a rigorous clinical trial with actual infected people, not cells, as the outcome. But I continue to be encouraged by interventions like this which, to be honest, ask very little of us as individuals. Maybe it’s time we accept the things, or people, that we cannot change.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. He reported no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.


Headache after drinking red wine? This could be why

Article Type
Changed
Mon, 11/27/2023 - 22:13

 



This transcript has been edited for clarity.

Robert Louis Stevenson famously said, “Wine is bottled poetry.” And I think it works quite well. I’ve had wines that are simple, elegant, and unpretentious like Emily Dickinson, and passionate and mysterious like Pablo Neruda. And I’ve had wines that are more analogous to the limerick you might read scrawled on a rest-stop bathroom wall. Those ones give me headaches.

Wine headaches are on my mind this week, not only because of the incoming tide of Beaujolais nouveau, but because of a new study which claims to have finally explained the link between wine consumption and headaches – and apparently it’s not just the alcohol.

Headaches are common, and headaches after drinking alcohol are particularly common. An interesting epidemiologic phenomenon, not yet adequately explained, is why red wine is associated with more headache than other forms of alcohol. There have been many studies fingering many suspects, from sulfites to tannins to various phenolic compounds, but none have really provided a concrete explanation for what might be going on.

A new hypothesis came to the fore on Nov. 20 in the journal Scientific Reports.

To understand the idea, first a reminder of what happens when you drink alcohol, physiologically.

Alcohol is metabolized by the enzyme alcohol dehydrogenase in the gut and then in the liver. That turns it into acetaldehyde, a toxic metabolite. In most of us, aldehyde dehydrogenase (ALDH) quickly metabolizes acetaldehyde to the inert acetate, which can be safely excreted.

[Figure courtesy of Dr. F. Perry Wilson]

I say “most of us” because some populations, particularly those with East Asian ancestry, have a mutation in the ALDH gene which can lead to accumulation of toxic acetaldehyde with alcohol consumption – leading to facial flushing, nausea, and headache.

We can also inhibit the enzyme medically. That’s what the drug disulfiram, also known as Antabuse, does. It doesn’t prevent you from wanting to drink; it makes the consequences of drinking incredibly aversive.

The researchers focused in on the aldehyde dehydrogenase enzyme and conducted a screening study. Are there any compounds in red wine that naturally inhibit ALDH?

The results pointed squarely at quercetin, and particularly its metabolite quercetin glucuronide, which, at 20 micromolar concentrations, inhibited about 80% of ALDH activity.

[Figure courtesy of Dr. F. Perry Wilson]

Quercetin is a flavonoid – a compound that gives color to a variety of vegetables and fruits, including grapes. In a test tube, it is an antioxidant, which is enough evidence to spawn a small quercetin-as-supplement industry, but there is no convincing evidence that it is medically useful. The authors then examined the concentration of quercetin glucuronide needed to achieve various degrees of ALDH inhibition, as you can see in the graph.

[Figure: Scientific Reports]

By about 10 micromolar, we see a decent amount of inhibition. Disulfiram is about 10 times more potent than that, but then again, you don’t drink three glasses of disulfiram with Thanksgiving dinner.

This is where this study stops. But it obviously tells us very little about what might be happening in the human body. For that, we need to ask the question: Can we get our quercetin levels to 10 micromolar? Is that remotely achievable?

Let’s start with how much quercetin there is in red wine. Like all things wine, it varies, but this study examining Australian wines found mean concentrations of 11 mg/L. The highest value I saw was close to 50 mg/L.



So let’s do some math. To make the numbers easy, let’s say you drank a liter of Australian wine, taking in 50 mg of quercetin glucuronide.

How much of that gets into your bloodstream? Some studies suggest a bioavailability of less than 1%, which basically means none and should probably put the quercetin hypothesis to bed. But there is some variation here too; it seems to depend on the form of quercetin you ingest.

Let’s say all 50 mg gets into your bloodstream. What blood concentration would that lead to? Well, I’ll keep the stoichiometry in the graphics and just say that if we assume that the volume of distribution of the compound is restricted to plasma alone, then you could achieve similar concentrations to what was done in petri dishes during this study.
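(Here is a rough version of that arithmetic, as a minimal sketch: the molecular weight and plasma volume are approximations, and complete absorption is the deliberately generous case described above.)

```python
# Back-of-the-envelope stoichiometry for the best-case scenario above.
# Assumptions: quercetin glucuronide MW ~478 g/mol, plasma volume ~3 L,
# and complete absorption of the 50 mg dose.
dose_mg = 50.0
mw_g_per_mol = 478.0          # approximate MW of quercetin-3-glucuronide
plasma_volume_L = 3.0         # typical adult plasma volume

dose_umol = dose_mg / mw_g_per_mol * 1000   # mg / (g/mol) = mmol; x1000 -> µmol
conc_uM = dose_umol / plasma_volume_L       # µmol per liter of plasma = µM
print(f"~{conc_uM:.0f} µM")                 # ≈ 35 µM, above the ~10 µM needed for meaningful ALDH inhibition
```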

[Figure courtesy of Dr. F. Perry Wilson]

Of course, if quercetin is really the culprit behind red wine headache, I have some questions: Why aren’t the Amazon reviews of quercetin supplements chock full of warnings not to take them with alcohol? And other foods have way higher quercetin concentration than wine, but you don’t hear people warning not to take your red onions with alcohol, or your capers, or lingonberries.

There’s some more work to be done here – most importantly, some human studies. Let’s give people wine with different amounts of quercetin and see what happens. Sign me up. Seriously.

As for Thanksgiving, it’s worth noting that cranberries have a lot of quercetin in them. So between the cranberry sauce, the Beaujolais, and your uncle ranting about the contrails again, the probability of headache is pretty darn high. Stay safe out there, and Happy Thanksgiving.

Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


The future of medicine is RNA

Article Type
Changed
Tue, 11/14/2023 - 15:54

Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr. F. Perry Wilson of the Yale School of Medicine.

Every once in a while, medicine changes in a fundamental way, and we may not realize it while it’s happening. I wasn’t around in 1928 when Fleming discovered penicillin; or in 1953 when Watson, Crick, and Franklin characterized the double-helical structure of DNA.

But looking at medicine today, there are essentially two places where I think we will see, in retrospect, that we were at a fundamental turning point. One is artificial intelligence, which gets so much attention and hype that I will simply say yes, this will change things, stay tuned.

The other is a bit more obscure, but I suspect it may be just as impactful. That other thing is RNA therapeutics – the medicines of the future.

[Figure courtesy of Dr. F. Perry Wilson]

I want to start with the idea that many diseases are, fundamentally, a problem of proteins. In some cases, like hypercholesterolemia, the body produces too much protein; in others, like hemophilia, too little.

[Figure courtesy of Dr. F. Perry Wilson]

When you think about disease this way, you realize that our current medications take effect late in the disease game. We have these molecules that try to block a protein from its receptor, prevent a protein from cleaving another protein, or increase the rate that a protein is broken down. It’s all distal to the fundamental problem: the production of the bad protein in the first place.

Enter small interfering RNAs, or siRNAs for short, discovered in 1998 by Andrew Fire and Craig Mello at UMass Worcester. The two won the Nobel Prize in medicine just 8 years later; that’s a really short time, highlighting just how important this discovery was. In contrast, Karikó and Weissman won the Nobel for mRNA vaccines this year, after inventing them 18 years ago.

siRNAs are the body’s way of targeting proteins for destruction before they are ever created. About 20 base pairs long, siRNAs seek out a complementary target mRNA, attach to it, and call in a group of proteins to destroy it. With the target mRNA gone, no protein can be created.
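(As a toy illustration of that “seek out a complementary target” step, here is a minimal sketch with invented sequences; real siRNA design also involves strand selection, seed-region rules, and off-target screening.)

```python
# Toy illustration of siRNA-to-mRNA complementarity. Sequences are invented for
# demonstration; real guide strands are ~21-23 nt, and matching in vivo is done
# by the RISC machinery, not by string search.
COMPLEMENT = str.maketrans("AUGC", "UACG")

def reverse_complement(rna: str) -> str:
    return rna.translate(COMPLEMENT)[::-1]

def find_target_site(guide: str, mrna: str) -> int:
    """Index in the mRNA where the guide strand pairs perfectly, or -1 if absent."""
    return mrna.find(reverse_complement(guide))

mrna  = "AUGGCUACGGAUUCCGAUCGUAGCUAGGCAUUGCA"  # hypothetical transcript fragment
guide = "GCUACGAUCGGAAUCCGUAGC"                # hypothetical 21-nt guide strand

print(find_target_site(guide, mrna))           # -> 3: the site that would be targeted
```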

[Figure courtesy of Dr. F. Perry Wilson]

You see where this is going, right? How does high cholesterol kill you? Proteins. How does Staphylococcus aureus kill you? Proteins. Even viruses can’t replicate if their RNA is prevented from being turned into proteins.

So, how do we use siRNAs? A new paper appearing in JAMA  describes a fairly impressive use case.

The background here is that higher levels of lipoprotein(a), an LDL-like particle, are associated with cardiovascular disease, heart attack, and stroke. But unfortunately, statins really don’t have any effect on lipoprotein(a) levels. Neither does diet. Your lipoprotein(a) level seems to be more or less hard-coded genetically.

So, what if we stop the genetic machinery from working? Enter lepodisiran, a drug from Eli Lilly. Unlike so many other medications, which are usually found in nature, purified, and synthesized, lepodisiran was created from scratch. It’s not hard. Thanks to the Human Genome Project, we know the sequence of the gene encoding lipoprotein(a), so designing an siRNA to target it specifically is trivial. That’s one of the key features of siRNA – you don’t have to find a chemical that binds strongly to some protein receptor, and worry about the off-target effects and all that nonsense. You just pick a protein you want to suppress and you suppress it.

Okay, it’s not that simple. siRNA is broken down very quickly by the body, so it needs to be targeted to the organ of interest – in this case, the liver, since that is where lipoprotein(a) is synthesized. Lepodisiran is targeted to the liver by a special targeting label, shown in the figure from the paper.

[Figure: JAMA]

The report is a standard dose-escalation trial. Six patients, all with elevated lipoprotein(a) levels, were started with a 4-mg dose (two additional individuals got placebo). They were intensely monitored, spending 3 days in a research unit for multiple blood draws followed by weekly, and then biweekly outpatient visits. Once they had done well, the next group of six people received a higher dose (two more got placebo), and the process was repeated – six times total – until the highest dose, 608 mg, was reached.
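(Just to tally the design described above, a minimal sketch; only the first and last doses are named in the text, so the intermediate dose levels are not spelled out here.)

```python
# Simple tally of the dose-escalation structure described above: six sequential
# cohorts, each with six active and two placebo participants, starting at 4 mg
# and ending at 608 mg. Intermediate dose levels are not listed in the text.
cohorts = 6
active_per_cohort, placebo_per_cohort = 6, 2
total_participants = cohorts * (active_per_cohort + placebo_per_cohort)
print(total_participants)   # 48 people across the whole escalation
```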

[Figure: JAMA]

This is an injection, of course; siRNA wouldn’t withstand the harshness of the digestive system. And it’s only one injection. You can see from the blood concentration curves that within about 48 hours, circulating lepodisiran was not detectable.

[Figure: JAMA]

But check out these results. Remember, this is from a single injection of lepodisiran.

Lipoprotein(a) levels start to drop within a week of administration, and they stay down. In the higher-dose groups, levels are nearly undetectable a year after that injection.

[Figure: JAMA]

It was this graph that made me sit back and think that there might be something new under the sun. A single injection that can suppress protein synthesis for an entire year? If it really works, it changes the game.

Of course, this study wasn’t powered to look at important outcomes like heart attacks and strokes. It was primarily designed to assess safety, and the drug was pretty well tolerated, with similar rates of adverse events in the drug and placebo groups.

As crazy as it sounds, the real concern here might be that this drug is too good; is it safe to drop your lipoprotein(a) levels to zero for a year? I don’t know. But lower doses don’t have quite as strong an effect.

Trust me, these drugs are going to change things. They already are. In July, The New England Journal of Medicine published a study of zilebesiran, an siRNA that inhibits the production of angiotensinogen, to control blood pressure. Similar story: One injection led to a basically complete suppression of angiotensinogen and a sustained decrease in blood pressure.

[Figure: The New England Journal of Medicine]

I’m not exaggerating when I say that there may come a time when you go to your doctor once a year, get your RNA shots, and don’t have to take any other medication from that point on. And that time may be, like, 5 years from now. It’s wild.

Seems to me that that rapid Nobel Prize was very well deserved.
 

Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships. This transcript has been edited for clarity.

A version of this article appeared on Medscape.com.
