Really? Cancer screening doesn’t save lives?


This transcript from Impact Factor has been edited for clarity.

If you are my age or older and, like me, something of a rule follower, then you’re getting screened for various cancers.

Colonoscopies, mammograms, cervical cancer screening, chest CTs for people with a significant smoking history. The tests are done, and usually, but not always, they are negative. If a test is positive, follow-up testing is usually, but not always, negative. And if it isn’t, and a new cancer is diagnosed, you tell yourself: Well, at least we caught it early. Isn’t it good that I’m a rule follower? My life was just saved.

But it turns out, proving that cancer screening actually saves lives is quite difficult. Is it possible that all this screening is for nothing?

The benefits, risks, or perhaps futility of cancer screening are in the news this week because of this article, appearing in JAMA Internal Medicine.

It’s a meta-analysis of very specific randomized trials of cancer screening modalities and concludes that, with the exception of sigmoidoscopy for colon cancer screening, none of them meaningfully change life expectancy.

Now – a bit of inside baseball here – I almost never choose to discuss meta-analyses on Impact Factor. It’s hard enough to dig deep into the methodology of a single study, but with a meta-analysis, you’re sort of obligated to review all the included studies, and, what’s worse, the studies that were not included but might bear on the central question.

In this case, though, the topic is important enough to think about a bit more, and the conclusions have large enough implications for public health that we should question them a bit.

First, let’s run down the study as presented.

The authors searched for randomized trials of cancer screening modalities. This is important, and I think appropriate. They wanted studies that took some people and assigned them to screening, and some people to no screening – avoiding the confounding that would come from observational data (rule followers like me tend to live longer owing to a variety of healthful behaviors, not just cancer screening).

They didn’t stop at just randomized trials, though. They wanted trials that reported on all-cause, not cancer-specific, mortality. We’ll dig into the distinction in a sec. Finally, they wanted trials with at least 10 years of follow-up time.

These are pretty strict criteria – and after applying that filter, we are left with a grand total of 18 studies to analyze. Most were in the colon cancer space; only two studies met criteria for mammography screening.

Right off the bat, this raises concerns for me. In the universe of high-quality studies of cancer screening modalities, these 18 are just the tip of the iceberg. And the results of a meta-analysis are always dependent on the included studies – definitionally.

The results as presented are compelling. None of the individual screening modalities significantly improve life expectancy, except for sigmoidoscopy, which improves it by a whopping 110 days.

[Figure: JAMA Internal Medicine]

(Side note: Averages are tricky here. It’s not like everyone who gets screened gets 110 extra days. Most people get nothing, and some people – those whose colon cancer was detected early – get a bunch of extra days.)
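
To make the side note concrete, here is a toy calculation. The 1-in-100 split and the 30-year gain below are assumptions chosen only so that the mean works out to roughly 110 days; they are not figures from the study.

```python
# Toy illustration of how a ~110-day average gain can hide a skewed
# distribution. The 1-in-100 split and 30-year gain are assumptions
# chosen only to make the arithmetic work out, not study figures.
n_screened = 1000
gainers = 10                       # assume 1 in 100 benefits from early detection
gain_per_gainer = 30 * 365.25      # ~30 years of extra life, in days

mean_gain = gainers * gain_per_gainer / n_screened
median_gain = 0.0                  # the typical screened person gains nothing

print(f"mean gain:   {mean_gain:.0f} days")    # ~110
print(f"median gain: {median_gain:.0f} days")  # 0
```

The mean and the median tell very different stories, which is exactly the trouble with a life-expectancy average.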

And a thing about meta-analysis: Meeting the criteria to be included in a meta-analysis does not necessarily mean the study was a good one. For example, one of the two mammography screening studies included is this one, from Miller and colleagues.

On the surface, it looks good – a large randomized trial of mammography screening in Canada, with long-term follow-up including all-cause mortality. Showing, by the way, no effect of screening on either breast cancer–specific or all-cause mortality.

But that study came under a lot of criticism owing to allegations that randomization was broken and women with palpable breast masses were preferentially put into the mammography group, making those outcomes worse.

The authors of the current meta-analysis don’t mention this. Indeed, they state that they don’t perform any assessments of the quality of the included studies.

But I don’t want to criticize all the included studies. Let’s think bigger picture.

Randomized trials of screening for cancers like colon, breast, and lung cancer in smokers have generally shown that those randomized to screening had lower target-cancer–specific mortality. Across all the randomized mammography studies, for example, women randomized to mammography were about 20% less likely to die of breast cancer than were those who were randomized to not be screened – particularly among those above age 50.

But it’s true that all-cause mortality, on the whole, has not differed statistically between those randomized to mammography vs. no mammography. What’s the deal?

Well, the authors of the meta-analysis engage in some zero-sum thinking here. They say that if it is true that screening tests reduce cancer-specific deaths, but all-cause mortality is not different, screening tests must increase mortality due to other causes. How? They cite colonic perforation during colonoscopy as an example of a harm that could lead to earlier death, which makes some sense. For mammography and other less invasive screening modalities, they suggest that the stress and anxiety associated with screening might increase the risk for death – this is a bit harder for me to defend.

The thing is, statistics really isn’t a zero-sum game. It’s a question of signal vs. noise. Take breast cancer, for example. Without screening, about 3.2% of women in this country would die of breast cancer. With screening, about 2.8% would die (a reduction of roughly 12% on the relative scale, in the same ballpark as the 20% figure from the trials). The truth is, most women don’t die of breast cancer. Most people don’t die of colon cancer. Even most smokers don’t die of lung cancer. Most people die of heart disease. And then cancer – but there are a lot of cancers out there, and only a handful have decent screening tests.
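
Running the quoted numbers through explicitly (these are the illustrative figures from the paragraph above; note that 3.2% vs. 2.8% implies a 12.5% relative reduction rather than 20%):

```python
# Relative vs. absolute risk reduction, using the illustrative lifetime
# breast cancer mortality figures from the text (3.2% without screening,
# 2.8% with screening).
p_unscreened = 0.032
p_screened = 0.028

arr = p_unscreened - p_screened   # absolute risk reduction: 0.004 (0.4 points)
rrr = arr / p_unscreened          # relative risk reduction: 0.125
nns = 1 / arr                     # number needed to screen to prevent one death

print(f"absolute reduction: {arr:.1%}")   # 0.4%
print(f"relative reduction: {rrr:.1%}")   # 12.5%
print(f"screen {nns:.0f} people to prevent one breast cancer death")  # 250
```

The gap between the relative and absolute numbers is the whole story: a respectable-sounding relative reduction still means screening roughly 250 people to prevent one breast cancer death.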

In other words, the screening tests are unlikely to help most people, because most people will not die of the particular cancer being screened for. But they will help a small number of the people being screened a lot, potentially saving their lives. If we knew who those people were in advance, that would be great, but then I suppose we wouldn’t need the screening test in the first place.

It’s not fair, then, to say that mammography increases non–breast cancer causes of death. In reality, it’s just that the impact of mammography on all-cause mortality is washed out by the random noise inherent to studying a sample of individuals rather than the entire population.
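
One way to see the washing out is a back-of-the-envelope power calculation. In the sketch below, the trial size, event risks, and background mortality are all assumptions for illustration: the same 0.4-percentage-point benefit that is easy to detect against a ~3% cancer-specific death rate becomes nearly undetectable against ~23% all-cause mortality.

```python
import math

# Back-of-the-envelope power calculation: why a real cancer-specific
# benefit can vanish in all-cause mortality. All numbers here (trial size,
# risks, background mortality) are assumptions for illustration, not
# figures from any specific trial.
n = 50_000                           # participants per arm (assumed)
p_bc_ctrl, p_bc_scr = 0.032, 0.028   # breast cancer death risk by arm
p_other = 0.20                       # assumed death risk from all other causes

def power_two_prop(p1, p2, n, z_alpha=1.959964):
    """Approximate power of a two-sided two-proportion z-test at alpha = 0.05."""
    pbar = (p1 + p2) / 2
    se_null = math.sqrt(2 * pbar * (1 - pbar) / n)
    se_alt = math.sqrt(p1 * (1 - p1) / n + p2 * (1 - p2) / n)
    z = (abs(p1 - p2) - z_alpha * se_null) / se_alt
    return 0.5 * (1 + math.erf(z / math.sqrt(2)))  # standard normal CDF

# Same 0.4-point absolute difference, two very different amounts of noise:
pw_cancer = power_two_prop(p_bc_ctrl, p_bc_scr, n)                   # ~0.96
pw_all = power_two_prop(p_bc_ctrl + p_other, p_bc_scr + p_other, n)  # ~0.32

print(f"power for cancer-specific mortality: {pw_cancer:.2f}")
print(f"power for all-cause mortality:       {pw_all:.2f}")
```

Under these assumed numbers, a trial well powered for the cancer-specific endpoint has little chance of showing the same benefit in all-cause mortality; a null all-cause result is expected even when the benefit is real.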

I’m reminded of that old story about the girl on the beach after a storm, throwing beached starfish back into the water. Someone comes by and says, “Why are you doing that? There are millions of starfish here – it doesn’t matter if you throw a few back.” And she says, “It matters for this one.”

There are other issues with aggregating data like these and concluding that there is no effect on all-cause mortality. For one, it assumes the people randomized to no screening never got screening. Most of these studies lasted 5-10 years, some with longer follow-up, but many people in the no-screening arm may have been screened as recommendations have changed. That would tend to bias the results against screening because the so-called control group, well, isn’t.
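
That dilution is easy to quantify. In the sketch below, the true effect size and the contamination rate are assumptions for illustration: if screening truly cuts cancer mortality by 20% but 30% of “controls” get screened anyway, the intention-to-treat comparison sees only about a 15% reduction.

```python
# How control-group contamination dilutes an intention-to-treat comparison.
# The true effect size and contamination rate below are assumptions for
# illustration, not estimates from the meta-analysis.
true_rrr = 0.20            # assumed true relative reduction from screening
p_unscreened = 0.032       # cancer mortality if never screened
p_screened = p_unscreened * (1 - true_rrr)   # 0.0256 if screened
contamination = 0.30       # assumed fraction of "controls" screened anyway

# Observed control-arm risk is a blend of screened and unscreened people
p_control_obs = (1 - contamination) * p_unscreened + contamination * p_screened
observed_rrr = (p_control_obs - p_screened) / p_control_obs

print(f"true relative reduction:     {true_rrr:.1%}")      # 20.0%
print(f"observed relative reduction: {observed_rrr:.1%}")  # ~14.9%
```

The more the control arm quietly gets screened, the closer the observed effect shrinks toward zero, which is exactly the direction of bias the text describes.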

It also fails to acknowledge the reality that screening for disease can be thought of as a package deal. Instead of asking whether screening for breast cancer, and colon cancer, and lung cancer individually saves lives, the real relevant question is whether a policy of screening for cancer in general saves lives. And that hasn’t been studied very broadly, except in one trial looking at screening for four cancers. That study is in this meta-analysis and, interestingly, seems to suggest that the policy does extend life – by 123 days. Again, be careful how you think about that average.

I don’t want to be an absolutist here. Whether these screening tests are a good idea or not is actually a moving target. As treatment for cancer gets better, detecting cancer early may not be as important. As new screening modalities emerge, older ones may not be preferable any longer. Better testing, genetic or otherwise, might allow us to tailor screening more narrowly than the population-based approach we have now.

But I worry that a meta-analysis like this, which concludes that screening doesn’t help on the basis of a handful of studies – without acknowledgment of the signal-to-noise problem, without accounting for screening in the control group, without acknowledging that screening should be thought of as a package – will lead some people to make the decision to forgo screening. For, say, 49 out of 50 of them, that may be fine. But for 1 out of 50 or so, well, it matters for that one.
 

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and on Medscape. He tweets @fperrywilson and his new book, How Medicine Works and When It Doesn’t, is available now. He has disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.



On the best way to exercise


This transcript has been edited for clarity.

I’m going to talk about something important to a lot of us, based on a new study that just came out promising to tell us the right way to exercise. This is a major issue as we think about the best ways to stay healthy.

There are basically two main types of exercise that exercise physiologists think about. There are aerobic exercises: the cardiovascular things like running on a treadmill or outside. Then there are muscle-strengthening exercises: lifting weights, calisthenics, and so on. And of course, plenty of exercises do both at the same time.

It seems that the era of aerobic exercise as the main way to improve health was the 1980s and early 1990s. Then we started to increasingly recognize that muscle-strengthening exercise was really important too. We’ve got a ton of data on the benefits of cardiovascular and aerobic exercise (a reduced risk for cardiovascular disease, cancer, and all-cause mortality, and even improved cognitive function) across a variety of study designs, including cohort studies, but also some randomized controlled trials where people were randomized to aerobic activity.

We’re starting to get more data on the benefits of muscle-strengthening exercises, although it hasn’t been in the zeitgeist as much. Obviously, this increases strength and may reduce visceral fat, increase anaerobic capacity and muscle mass, and therefore [increase the] basal metabolic rate. What is really interesting about muscle strengthening is that muscle just takes up more energy at rest, so building bigger muscles increases your basal energy expenditure and increases insulin sensitivity because muscle is a good insulin sensitizer.

So, do you do both? Do you do one? Do you do the other? What’s the right answer here?

It depends on whom you ask. The Centers for Disease Control and Prevention’s recommendation, which changes from time to time, is that you should do at least 150 minutes a week of moderate-intensity aerobic activity – anything that gets your heart beating faster counts. That’s 30 minutes, 5 days a week. Alternatively, you can do 75 minutes a week of vigorous-intensity aerobic activity – something that really gets your heart rate up and has you breaking a sweat. The CDC also recommends at least 2 days a week of muscle-strengthening activity that makes your muscles work harder than usual, whether that’s push-ups or lifting weights or something like that.

The World Health Organization is similar, but it doesn’t name a single 150-minute target. It recommends at least 150 and up to 300 minutes of moderate-intensity physical activity, or 75-150 minutes of vigorous-intensity aerobic physical activity, per week – a floor and a range where the CDC sets one target. The WHO also recommends 2 days of muscle strengthening per week for optimal health.

But what do the data show? Why am I talking about this? It’s because of this new study in JAMA Internal Medicine by Ruben Lopez Bueno and colleagues. I’m going to focus on all-cause mortality for brevity, but the results are broadly similar.

The data source is the U.S. National Health Interview Survey. A total of 500,705 people took part in the survey and answered a slew of questions (including self-reports on their exercise amounts), with a median follow-up of about 10 years looking for things like cardiovascular deaths, cancer deaths, and so on.

The survey classified people into different exercise categories – how much time they spent doing moderate physical activity (MPA), vigorous physical activity (VPA), or muscle-strengthening activity (MSA).

Dr. Wilson


There are six categories based on duration of MPA (the WHO targets are highlighted in green), four categories based on length of time of VPA, and two categories of MSA (≥ or < two times per week). This gives a total of 48 possible combinations of exercise you could do in a typical week.

JAMA Internal Medicine


Here are the percentages of people who fell into each of these 48 potential categories. The largest is the 35% of people who fell into the “nothing” category (no MPA, no VPA, and less than two sessions per week of MSA). These “nothing” people are going to be a reference category moving forward.

JAMA Internal Medicine


So who are these people? On the far left are the 361,000 people (the vast majority) who don’t hit that 150 minutes a week of MPA or 75 minutes a week of VPA, and they don’t do 2 days a week of MSA. The other three categories are increasing amounts of exercise. Younger people seem to be doing more exercise at the higher ends, and men are more likely to be doing exercise at the higher end. There are also some interesting findings from the alcohol drinking survey. The people who do more exercise are more likely to be current drinkers. This is interesting. I confirmed these data with the investigator. This might suggest one of the reasons why some studies have shown that drinkers have better outcomes in terms of either cardiovascular or cognitive outcomes over time. There’s a lot of conflicting data there, but in part, it might be that healthier people might drink more alcohol. It could be a socioeconomic phenomenon as well.

Now, what blew my mind were these smoker numbers, but don’t get too excited about it. What it looks like from the table in JAMA Internal Medicine is that 20% of the people who don’t do much exercise smoke, and then something like 60% of the people who do more exercise smoke. That can’t be right. So I checked with the lead study author. There is a mistake in these columns for smoking. They were supposed to flip the “never smoker” and “current smoker” numbers. You can actually see that just 15.2% of those who exercise a lot are current smokers, not 63.8%. This has been fixed online, but just in case you saw this and you were as confused as I was that these incredibly healthy smokers are out there exercising all the time, it was just a typo.

Dr. Wilson


There is bias here. One of the big ones is called reverse causation bias. This is what might happen if, let’s say you’re already sick, you have cancer, you have some serious cardiovascular disease, or heart failure. You can’t exercise that much. You physically can’t do it. And then if you die, we wouldn’t find that exercise is beneficial. We would see that sicker people aren’t as able to exercise. The investigators got around this a bit by excluding mortality events within 2 years of the initial survey. Anyone who died within 2 years after saying how often they exercised was not included in this analysis.

This is known as the healthy exerciser or healthy user effect. Sometimes this means that people who exercise a lot probably do other healthy things; they might eat better or get out in the sun more. Researchers try to get around this through multivariable adjustment. They adjust for age, sex, race, marital status, etc. No adjustment is perfect. There’s always residual confounding. But this is probably the best you can do with the dataset like the one they had access to.

JAMA Internal Medicine


Let’s go to the results, which are nicely heat-mapped in the paper. They’re divided into people who have less or more than 2 days of MSA. Our reference groups that we want to pay attention to are the people who don’t do anything. The highest mortality of 9.8 individuals per 1,000 person-years is seen in the group that reported no moderate physical activity, no VPA, and less than 2 days a week of MSA.

As you move up and to the right (more VPA and MPA), you see lower numbers. The lowest number was 4.9 among people who reported more than 150 minutes per week of VPA and 2 days of MSA.

Looking at these data, the benefit, or the bang for your buck is higher for VPA than for MPA. Getting 2 days of MSA does have a tendency to reduce overall mortality. This is not necessarily causal, but it is rather potent and consistent across all the different groups.

So, what are we supposed to do here? I think the most clear finding from the study is that anything is better than nothing. This study suggests that if you are going to get activity, push on the vigorous activity if you’re physically able to do it. And of course, layering in the MSA as well seems to be associated with benefit.

Like everything in life, there’s no one simple solution. It’s a mix. But telling ourselves and our patients to get out there if you can and break a sweat as often as you can during the week, and take a couple of days to get those muscles a little bigger, may increase insulin sensitivity and basal metabolic rate – is it guaranteed to extend life? No. This is an observational study. We can’t say; we don’t have causal data here, but it’s unlikely to cause much harm. I’m particularly happy that people are doing a much better job now of really dissecting out the kinds of physical activity that are beneficial. It turns out that all of it is, and probably a mixture is best.

Dr. Wilson is associate professor, department of medicine, and interim director, program of applied translational research, Yale University, New Haven, Conn. He disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.

Publications
Topics
Sections

This transcript has been edited for clarity.

I’m going to talk about something important to a lot of us, based on a new study that has just come out that promises to tell us the right way to exercise. This is a major issue as we think about the best ways to stay healthy.

There are basically two main types of exercise that exercise physiologists think about. There are aerobic exercises: the cardiovascular things like running on a treadmill or outside. Then there are muscle-strengthening exercises: lifting weights, calisthenics, and so on. And of course, plenty of exercises do both at the same time.

It seems that the era of aerobic exercise as the main way to improve health was the 1980s and early 1990s. Then we started to increasingly recognize that muscle-strengthening exercise was really important too. We’ve got a ton of data on the benefits of cardiovascular and aerobic exercise (a reduced risk for cardiovascular disease, cancer, and all-cause mortality, and even improved cognitive function) across a variety of study designs, including cohort studies, but also some randomized controlled trials where people were randomized to aerobic activity.

We’re starting to get more data on the benefits of muscle-strengthening exercises, although it hasn’t been in the zeitgeist as much. Obviously, this increases strength and may reduce visceral fat, increase anaerobic capacity and muscle mass, and therefore [increase the] basal metabolic rate. What is really interesting about muscle strengthening is that muscle just takes up more energy at rest, so building bigger muscles increases your basal energy expenditure and increases insulin sensitivity because muscle is a good insulin sensitizer.

So, do you do both? Do you do one? Do you do the other? What’s the right answer here?

It depends on who you ask. The Centers for Disease Control and Prevention’s recommendation, which changes from time to time, is that you should do at least 150 minutes a week of moderate-intensity aerobic activity. Anything that gets your heart beating faster counts here. So that’s 30 minutes, 5 days a week. Alternatively, they say you can do 75 minutes a week of vigorous-intensity aerobic activity – something that really gets your heart rate up and leaves you breaking a sweat. They also recommend at least 2 days a week of muscle-strengthening activity that makes your muscles work harder than usual, whether that’s push-ups or lifting weights or something like that.

The World Health Organization is similar. Rather than a single 150-minute target, they recommend at least 150 and up to 300 minutes of moderate-intensity physical activity, or 75-150 minutes of vigorous-intensity aerobic physical activity. In other words, the WHO sets a floor and a range, whereas the CDC sets a target. They also recommend 2 days of muscle strengthening per week for optimal health.

But what do the data show? Why am I talking about this? It’s because of this new study in JAMA Internal Medicine by Ruben Lopez Bueno and colleagues. I’m going to focus on all-cause mortality for brevity, but the results are broadly similar.

The data source is the U.S. National Health Interview Survey. A total of 500,705 people took part in the survey and answered a slew of questions (including self-reports on their exercise amounts), with a median follow-up of about 10 years looking for things like cardiovascular deaths, cancer deaths, and so on.

The survey classified people into different exercise categories – how much time they spent doing moderate physical activity (MPA), vigorous physical activity (VPA), or muscle-strengthening activity (MSA).

There are six categories based on duration of MPA (the WHO targets are highlighted in green), four categories based on length of time of VPA, and two categories of MSA (≥ or < two times per week). This gives a total of 48 possible combinations of exercise you could do in a typical week.
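Just to make that arithmetic concrete, the 48 cells fall straight out of the three category counts. A quick sketch (the bin labels here are illustrative, not the paper’s exact cut points):

```python
from itertools import product

# Illustrative bin labels only -- the paper's exact cut points may differ.
mpa_bins = ["none", "<75", "75-149", "150-224", "225-299", ">=300"]  # 6 MPA bins (min/wk)
vpa_bins = ["none", "<75", "75-149", ">=150"]                        # 4 VPA bins (min/wk)
msa_bins = ["<2 sessions/wk", ">=2 sessions/wk"]                     # 2 MSA bins

combos = list(product(mpa_bins, vpa_bins, msa_bins))
print(len(combos))  # 6 * 4 * 2 = 48 possible weekly exercise patterns
```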

Here are the percentages of people who fell into each of these 48 potential categories. The largest is the 35% of people who fell into the “nothing” category (no MPA, no VPA, and less than two sessions per week of MSA). These “nothing” people are going to be a reference category moving forward.

So who are these people? On the far left are the 361,000 people (the vast majority) who don’t hit that 150 minutes a week of MPA or 75 minutes a week of VPA, and they don’t do 2 days a week of MSA. The other three categories are increasing amounts of exercise. Younger people seem to be doing more exercise at the higher ends, and men are more likely to be doing exercise at the higher end. There are also some interesting findings from the alcohol drinking survey. The people who do more exercise are more likely to be current drinkers. This is interesting. I confirmed these data with the investigator. This might suggest one of the reasons why some studies have shown that drinkers have better outcomes in terms of either cardiovascular or cognitive outcomes over time. There’s a lot of conflicting data there, but in part, it might be that healthier people might drink more alcohol. It could be a socioeconomic phenomenon as well.

Now, what blew my mind were the smoker numbers, but don’t get too excited. From the table in JAMA Internal Medicine, it looks as though 20% of the people who don’t do much exercise smoke, while something like 60% of the people who do the most exercise smoke. That can’t be right. So I checked with the lead study author: There is a mistake in the smoking columns – the “never smoker” and “current smoker” numbers were flipped. In fact, just 15.2% of those who exercise a lot are current smokers, not 63.8%. This has been fixed online, but in case you saw it and were as confused as I was about these incredibly healthy smokers exercising all the time, it was just a typo.

There is bias here. One of the big ones is called reverse causation bias. This is what might happen if, let’s say you’re already sick, you have cancer, you have some serious cardiovascular disease, or heart failure. You can’t exercise that much. You physically can’t do it. And then if you die, we wouldn’t find that exercise is beneficial. We would see that sicker people aren’t as able to exercise. The investigators got around this a bit by excluding mortality events within 2 years of the initial survey. Anyone who died within 2 years after saying how often they exercised was not included in this analysis.

A related bias is the healthy exerciser, or healthy user, effect: People who exercise a lot probably do other healthy things; they might eat better or get out in the sun more. Researchers try to get around this through multivariable adjustment – for age, sex, race, marital status, and so on. No adjustment is perfect; there’s always residual confounding. But this is probably the best you can do with a dataset like the one they had access to.

Let’s go to the results, which are nicely heat-mapped in the paper. They’re divided into people who do fewer than, or at least, 2 days of MSA per week. The reference group to pay attention to is the people who don’t do anything. The highest mortality rate – 9.8 deaths per 1,000 person-years – is seen in the group that reported no MPA, no VPA, and fewer than 2 days a week of MSA.

As you move up and to the right (more VPA and MPA), you see lower numbers. The lowest number was 4.9 among people who reported more than 150 minutes per week of VPA and 2 days of MSA.
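Putting those two extreme cells side by side makes the gap concrete. This is a descriptive comparison only (rates as quoted above), not a causal estimate:

```python
# Extreme cells of the mortality heat map, per 1,000 person-years (from the text).
rate_nothing = 9.8  # no MPA, no VPA, <2 days/wk of MSA
rate_most    = 4.9  # >150 min/wk of VPA plus >=2 days/wk of MSA

rate_ratio = rate_most / rate_nothing
abs_diff   = rate_nothing - rate_most
print(f"rate ratio: {rate_ratio:.2f}")                      # 0.50
print(f"absolute difference: {abs_diff:.1f} per 1,000 PY")  # 4.9
```

That is, the most active group’s crude mortality rate was about half that of the least active group, a difference of roughly 5 deaths per 1,000 person-years.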

Looking at these data, the benefit, or the bang for your buck is higher for VPA than for MPA. Getting 2 days of MSA does have a tendency to reduce overall mortality. This is not necessarily causal, but it is rather potent and consistent across all the different groups.

So, what are we supposed to do here? The clearest finding from the study is that anything is better than nothing. It also suggests that, if you’re going to be active, push on the vigorous activity if you’re physically able. And of course, layering in MSA as well seems to be associated with benefit.

Like everything in life, there’s no one simple solution; it’s a mix. Telling ourselves and our patients to get out there and break a sweat as often as we can during the week, and to take a couple of days to build those muscles a bit bigger, may increase insulin sensitivity and basal metabolic rate. Is it guaranteed to extend life? No – this is an observational study, and we don’t have causal data here. But it’s unlikely to cause much harm. I’m particularly happy that people are doing a much better job now of dissecting out which kinds of physical activity are beneficial. It turns out that all of it is, and probably a mixture is best.

Dr. Wilson is associate professor, department of medicine, and interim director, program of applied translational research, Yale University, New Haven, Conn. He disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


A new and completely different pain medicine

Article Type
Changed
Mon, 08/14/2023 - 14:46

This transcript has been edited for clarity.

When you stub your toe or get a paper cut on your finger, you feel the pain in that part of your body. It feels like the pain is coming from that place. But, of course, that’s not really what is happening. Pain doesn’t really happen in your toe or your finger. It happens in your brain.

It’s a game of telephone, really. The afferent nerve fiber detects the noxious stimulus, passing that signal to the second-order neuron in the dorsal root ganglia of the spinal cord, which runs it up to the thalamus to be passed to the third-order neuron which brings it to the cortex for localization and conscious perception. It’s not even a very good game of telephone. It takes about 100 ms for a pain signal to get from the hand to the brain – longer from the feet, given the greater distance. You see your foot hit the corner of the coffee table and have just enough time to think: “Oh no!” before the pain hits.
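That 100-ms figure is consistent with back-of-the-envelope conduction arithmetic. Here is a toy calculation; the path lengths and the ~10 m/s velocity are illustrative assumptions, roughly in the range of thinly myelinated pain fibers:

```python
def latency_ms(path_m: float, velocity_m_s: float) -> float:
    """Time for a signal to travel a nerve path of given length, in milliseconds."""
    return path_m / velocity_m_s * 1000

# Illustrative numbers: ~1 m hand-to-brain, ~1.6 m foot-to-brain, ~10 m/s fiber.
print(latency_ms(1.0, 10))  # 100.0 ms -- the figure quoted in the text
print(latency_ms(1.6, 10))  # ~160 ms -- why the feet lag behind
```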

Given the Rube Goldberg nature of the process, it would seem like there are any number of places we could stop pain sensation. And sure, local anesthetics at the site of injury, or even spinal anesthetics, are powerful – if temporary and hard to administer – solutions to acute pain.

But in our everyday armamentarium, let’s be honest – we essentially have three options: opiates and opioids, which activate the mu-receptors in the brain to dull pain (and cause a host of other nasty side effects); NSAIDs, which block prostaglandin synthesis and thus limit the ability for pain-conducting neurons to get excited; and acetaminophen, which, despite being used for a century, is poorly understood.

But now, we enter the prologue of what might be the next big story in pain control. Let’s talk about VX-548.

If you were to zoom in on the connection between that first afferent pain fiber and the secondary nerve in the spinal cord dorsal root ganglion, you would see a receptor called Nav1.8, a voltage-gated sodium channel.

This receptor is a key part of the apparatus that passes information from nerve 1 to nerve 2, but only for fibers that transmit pain signals. In fact, humans with mutations in this receptor that leave it always in the “open” state have a severe pain syndrome. Blocking the receptor, therefore, might reduce pain.

In preclinical work, researchers identified VX-548, which doesn’t have a brand name yet, as a potent blocker of that channel even in nanomolar concentrations. Importantly, the compound was highly selective for that particular channel – about 30,000 times more selective than it was for the other sodium channels in that family.

Of course, a highly selective and specific drug does not a blockbuster analgesic make. To determine how the drug would work in humans in pain, the researchers turned to two populations: 303 individuals undergoing abdominoplasty and 274 undergoing bunionectomy, as reported in a new paper in the New England Journal of Medicine.

I know this seems a bit random, but abdominoplasty is quite painful and a good model for soft-tissue pain. Bunionectomy is also quite a painful procedure and a useful model of bone pain. After the surgeries, patients were randomized to several different doses of VX-548, hydrocodone plus acetaminophen, or placebo for 48 hours.

At 19 time points over that 48-hour period, participants were asked to rate their pain on a scale from 0 to 10. The primary outcome was the cumulative pain experienced over the 48 hours. So, higher pain would be worse here, but longer duration of pain would also be worse.
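A cumulative pain endpoint like this is essentially the area under the pain-score-versus-time curve. A minimal sketch of how such an endpoint can be computed, using made-up pain scores and time points (the trial’s actual scoring rules are in the paper):

```python
def cumulative_pain(hours, scores):
    """Trapezoidal area under the pain-score curve (units: score-hours)."""
    total = 0.0
    for i in range(1, len(hours)):
        total += (scores[i] + scores[i - 1]) / 2 * (hours[i] - hours[i - 1])
    return total

hours   = [0, 12, 24, 36, 48]  # assessment times, hours post-surgery (hypothetical)
drug    = [8, 5, 4, 3, 2]      # hypothetical 0-10 pain scores
placebo = [8, 7, 6, 5, 5]

print(cumulative_pain(hours, drug))     # 204.0 -- less cumulative pain
print(cumulative_pain(hours, placebo))  # 294.0
```

Because the endpoint rewards pain that is both lower and shorter-lived, a drug with fast onset and sustained effect separates from placebo on this measure.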

The story of the study is really told in this chart.

Yes, those assigned to the highest dose of VX-548 had a statistically significant lower cumulative amount of pain in the 48 hours after surgery. But the picture is really worth more than the stats here. You can see that the onset of pain relief was fairly quick, and that pain relief was sustained over time. You can also see that this is not a miracle drug. Pain scores were a bit better 48 hours out, but only by about a point and a half.

Placebo isn’t really the fair comparison here; few of us treat our postabdominoplasty patients with placebo, after all. The authors do not formally compare the effect of VX-548 with that of the opioid hydrocodone, for instance. But that doesn’t stop us.

This graph, which I put together from data in the paper, shows pain control across the four randomization categories, with higher numbers indicating more (cumulative) control. While all the active agents do a bit better than placebo, VX-548 at the higher dose appears to do the best. But I should note that 5 mg of hydrocodone may not be an adequate dose for most people.

Yes, I would really have killed for an NSAID arm in this trial. Its absence, given that NSAIDs are a staple of postoperative care, is ... well, let’s just say, notable.

Although not a pain-destroying machine, VX-548 has some other things to recommend it. The receptor is really not found in the brain at all, which suggests that the drug should not carry much risk for dependency, though that has not been formally studied.

The side effects were generally mild – headache was the most common – and less prevalent than what you see even in the placebo arm.

Perhaps most notable is the fact that the rate of discontinuation of the study drug was lowest in the VX-548 arm. Patients could stop taking the pill they were assigned for any reason, ranging from perceived lack of efficacy to side effects. A low discontinuation rate indicates to me a sort of “voting with your feet” that suggests this might be a well-tolerated and reasonably effective drug.

VX-548 isn’t on the market yet; phase 3 trials are ongoing. But whether it is this particular drug or another in this class, I’m happy to see researchers trying to find new ways to target that most primeval form of suffering: pain.

Dr. Wilson is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator, New Haven, Conn. He disclosed no relevant financial relationships.

A version of this article appeared on Medscape.com.


What AI can see in CT scans that humans can’t

Article Type
Changed
Wed, 07/26/2023 - 10:37

 

This transcript has been edited for clarity.

If a picture is worth a thousand words, then a CT scan of the chest might as well be Atlas Shrugged. When you think of the sheer information content in one of those scans, it becomes immediately clear that our usual method of CT scan interpretation must be leaving a lot on the table. After all, we can go through all that information and come out with simply “normal” and call it a day.
 

Of course, radiologists can glean a lot from a CT scan, but they are trained to look for abnormalities. They can find pneumonia, emboli, fractures, and pneumothoraces, but the presence or absence of life-threatening abnormalities is still just a fraction of the data contained within a CT scan.

Pulling out more data from those images – data that may not indicate disease per se, but nevertheless tell us something important about patients and their risks – might just fall to those entities that are primed to take a bunch of data and interpret it in new ways: artificial intelligence (AI).

I’m thinking about AI and CT scans this week thanks to this study, appearing in the journal Radiology, from Kaiwen Xu and colleagues at Vanderbilt.

In a previous study, the team had developed an AI algorithm to take chest CT images and convert that data into information about body composition: skeletal muscle mass, fat mass, muscle lipid content – that sort of thing.

This is a beautiful example of how AI can take data we already have sitting around and do something new with it. While the radiologists are busy looking for cancer or pneumonia, the AI can create a body composition report – two results from one data stream.

Here’s an example of a report generated from a CT scan from the authors’ GitHub page.

The cool thing here is that this is a clinically collected CT scan of the chest, not a special protocol designed to assess body composition. In fact, this comes from the low-dose lung cancer screening trial dataset.

As you may know, the U.S. Preventive Services Task Force recommends low-dose CT screening of the chest every year for those aged 50-80 with at least a 20 pack-year smoking history. These CT scans form an incredible dataset, actually, as they are all collected with nearly the same parameters. Obviously, the important thing to look for in these CT scans is whether there is early lung cancer. But the new paper asks, as long as we can get information about body composition from these scans, why don’t we? Can it help to risk-stratify these patients?
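Those eligibility criteria are simple enough to express in code. Here is a minimal sketch (the function name is mine; note also that the full USPSTF recommendation additionally requires current smoking or having quit within the past 15 years, a detail not spelled out above):

```python
def eligible_for_lung_ct_screening(age, pack_years, quit_years=0):
    """Rough sketch of the USPSTF low-dose CT lung screening criteria:
    age 50-80 with at least a 20 pack-year smoking history, and either
    currently smoking (quit_years=0) or quit within the past 15 years."""
    return 50 <= age <= 80 and pack_years >= 20 and quit_years <= 15
```

So a 60-year-old with a 30 pack-year history qualifies, while the same smoking history at age 45 does not.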

They took 20,768 individuals with CT scans done as part of the low-dose lung cancer screening trial and passed their scans through their automated data pipeline.

One cool feature here: Depending on body size, sometimes the edges of people in CT scans are not visible. That’s not a big deal for lung cancer screening as long as you can see both lungs, but it does matter for assessment of muscle and body fat, because that tissue lives on the edges of the thoracic cavity. The authors’ data pipeline accounts for this, extrapolating what the missing pieces look like from what is visible. It’s quite clever.

On to some results. Would knowledge about the patient’s body composition help predict their ultimate outcome?

It would. And the best single predictor found was skeletal muscle attenuation – lower levels of skeletal muscle attenuation mean more fat infiltrating the muscle – so lower is worse here. You can see from these all-cause mortality curves that lower levels were associated with substantially worse life expectancy.

It’s worth noting that these are unadjusted curves. While AI prediction from CT images is very cool, we might be able to make similar predictions knowing, for example, the age of the patient. To account for this, the authors adjusted the findings for age, diabetes, heart disease, stroke, and coronary calcium score (also calculated from those same CT scans). Even after adjustment, skeletal muscle attenuation was significantly associated with all-cause mortality, cardiovascular mortality, and lung-cancer mortality – but not lung cancer incidence.

Those results tell us that there is likely a physiologic significance to skeletal muscle attenuation, and they provide a great proof-of-concept that automated data extraction techniques can be applied broadly to routinely collected radiology images.

That said, it’s one thing to show that something is physiologically relevant. In terms of actually predicting outcomes, adding this information to a model that contains just those clinical factors like age and diabetes doesn’t improve things very much. We measure this with something called the concordance index, which tells us, for a randomly chosen pair of individuals, how often we can correctly identify the person who will have the outcome of interest sooner – if at all. (You can probably guess that the worst possible score is thus 0.5 and the best is 1.) A model without the AI data gives a concordance index for all-cause mortality of 0.71 or 0.72, depending on sex. Adding in the body composition data bumps that up only by a percentage point or so.
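For readers who like to see the arithmetic behind that statistic, here is a small, self-contained sketch of Harrell’s concordance index, the usual form of the concordance statistic for survival data. This is illustrative code of my own, not the study authors’ implementation, and it ignores tied event times for simplicity:

```python
import itertools

def concordance_index(times, events, scores):
    """Harrell's C-index for right-censored survival data.

    times:  observed follow-up times
    events: 1 if the outcome occurred at that time, 0 if censored
    scores: model risk scores (higher = predicted to have the event sooner)
    """
    concordant = 0.0
    comparable = 0
    for (t1, e1, s1), (t2, e2, s2) in itertools.combinations(
        zip(times, events, scores), 2
    ):
        if t1 == t2:
            continue  # ignore tied times for simplicity
        # A pair is only comparable if the earlier time ended in an event;
        # if it was censored, we never learn who had the outcome first.
        if (t1 < t2 and not e1) or (t2 < t1 and not e2):
            continue
        comparable += 1
        earlier, later = (s1, s2) if t1 < t2 else (s2, s1)
        if earlier > later:
            concordant += 1.0  # higher score had the earlier event: concordant
        elif earlier == later:
            concordant += 0.5  # tied scores count as half
    return concordant / comparable
```

A risk score that ranks everyone perfectly returns 1.0, a score that ranks everyone backward returns 0.0, and a coin-flip score hovers around 0.5.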

This honestly feels a bit like a missed opportunity to me. The authors pass the imaging data through an AI to get body composition data and then see how that predicts death.

Why not skip the middleman? Train a model using the imaging data to predict death directly, using whatever signal the AI chooses: body composition, lung size, rib thickness – whatever.

I’d be very curious to see how that model might improve our ability to predict these outcomes. In the end, this is a space where AI can make some massive gains – not by trying to do radiologists’ jobs better than radiologists, but by extracting information that radiologists aren’t looking for in the first place.

F. Perry Wilson, MD, MSCE, is associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no conflicts of interest.

A version of this article first appeared on Medscape.com.


The surprising occupations with higher-than-expected ovarian cancer rates

Article Type
Changed
Tue, 07/18/2023 - 11:43

This transcript has been edited for clarity.

Welcome to Impact Factor, your weekly dose of commentary on a new medical study.

Basically, all cancers are caused by a mix of genetic and environmental factors, with some cancers driven more strongly by one or the other. When it comes to ovarian cancer, which kills more than 13,000 women per year in the United States, genetic factors like the BRCA gene mutations are well described.

Other risk factors, like early menarche and nulliparity, are difficult to modify. The only slam-dunk environmental toxin to be linked to ovarian cancer is asbestos. Still, the vast majority of women who develop ovarian cancer do not have a known high-risk gene or asbestos exposure, so other triggers may be out there. How do we find them? The answer may just be good old-fashioned epidemiology.

When you’re looking for a new culprit agent that causes a relatively rare disease, the case-control study design is your best friend.

That’s just what researchers, led by Anita Koushik at the University of Montreal, did in a new study appearing in the journal Occupational and Environmental Medicine.

They identified 497 women in Montreal who had recently been diagnosed with ovarian cancer. They then matched those women to 897 women without ovarian cancer, based on age and address. (This approach would not work well in the United States, as diagnosis of ovarian cancer might depend on access to medical care, which is not universal here. In Canada, however, it’s safer to assume that anyone who could have gotten ovarian cancer in Montreal would have been detected.)

Cases and controls identified, the researchers took a detailed occupational history for each participant: every job they ever worked, and when, and for how long. Each occupation was mapped to a standardized set of industries and, interestingly, to a set of environmental exposures ranging from cosmetic talc to cooking fumes to cotton dust, in what is known as a job-exposure matrix. Of course, they also collected data on other ovarian cancer risk factors.
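As a rough illustration of how a job-exposure matrix works – the jobs, agents, and durations below are invented for illustration, not taken from the study:

```python
# Toy job-exposure matrix (JEM): each reported job maps to the agents a
# worker in that job is assumed to encounter. Jobs, agents, and durations
# here are invented placeholders, not the study's actual matrix.
job_exposure_matrix = {
    "hairdresser": {"cosmetic talc", "bleaches", "fluorocarbons"},
    "cook": {"cooking fumes"},
    "textile worker": {"cotton dust"},
    "accountant": set(),  # office work: no agents in this toy matrix
}

def exposure_years(work_history):
    """Turn a list of (job, years) pairs into cumulative years per imputed agent."""
    totals = {}
    for job, years in work_history:
        for agent in job_exposure_matrix.get(job, set()):
            totals[agent] = totals.get(agent, 0) + years
    return totals

# One participant's occupational history: 12 years hairdressing, 3 years cooking.
print(exposure_years([("hairdresser", 12), ("cook", 3)]))
```

Note that exposure is imputed from the job title alone, which is exactly why job and agent are so hard to disentangle later in the analysis.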

After that, it’s a simple matter of looking at the rate of ovarian cancer by occupation and occupation-associated exposures, accounting for differences in things like pregnancy rates.

A brief aside here. I was at dinner with my wife the other night and telling her about this study, and I asked, “What do you think the occupation with the highest rate of ovarian cancer is?” And without missing a beat, she said: “Hairdressers.” Which blew my mind because of how random that was, but she was also – as usual – 100% correct.

Hairdressers, at least those who had been in the industry for more than 10 years, had a threefold higher risk for ovarian cancer than matched controls who had never been hairdressers.
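For context on how a case-control study turns counts into a "threefold higher risk," here is the odds-ratio arithmetic with invented cell counts (the study's actual tallies are not reproduced here):

```python
# The odds ratio from a 2x2 table of exposure (ever a hairdresser 10+ years)
# by disease status. Cell counts are invented for illustration; only the
# formula -- cross-product of the table -- reflects how such studies work.
def odds_ratio(exposed_cases, unexposed_cases, exposed_controls, unexposed_controls):
    return (exposed_cases * unexposed_controls) / (unexposed_cases * exposed_controls)

or_hairdresser = odds_ratio(
    exposed_cases=30, unexposed_cases=467,      # out of 497 cases
    exposed_controls=20, unexposed_controls=877,  # out of 897 controls
)
print(round(or_hairdresser, 2))
```

With these made-up counts the odds ratio lands near 3, the kind of elevation described above; the real analysis also adjusts for confounders like parity.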

Of course, my wife is a cancer surgeon, so she has a bit of a leg up on me here. Many of you may also know that there is actually a decent body of literature showing higher rates of various cancers among hairdressers, presumably due to the variety of chemicals they are exposed to on a continuous basis.

The No. 2 highest-risk profession on the list? Accountants, with about a twofold higher risk. That one is more of a puzzler. It could be a false positive; after all, there were multiple occupations checked and random error might give a few hits that are meaningless. But there are certainly some occupational factors unique to accountants that might bear further investigation – maybe exposure to volatile organic compounds from office printers, or just a particularly sedentary office environment.
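The false-positive concern is easy to quantify: screen enough occupations at the conventional p < 0.05 threshold and chance alone guarantees some "hits." A sketch, with the number of occupations (40) chosen purely for illustration:

```python
# If none of the screened occupations truly raises risk, about 5% of them
# will still cross p < 0.05 by chance. The 40 below is an illustrative
# number of occupations tested, not the study's exact count.
n_occupations = 40
alpha = 0.05

expected_false_positives = n_occupations * alpha          # ~2 spurious "risky" jobs
prob_at_least_one = 1 - (1 - alpha) ** n_occupations      # chance of any false hit

print(round(expected_false_positives, 2), round(prob_at_least_one, 2))
```

In other words, with dozens of occupations on the list, at least one spurious association is close to a sure thing, which is why a lone accountant signal warrants replication before anyone blames the office printer.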

In terms of specific exposures, there were high risks seen with mononuclear aromatic hydrocarbons, bleaches, ethanol, and fluorocarbons, among others, but we have to be a bit more careful here. These exposures were not directly measured. Rather, based on the job category a woman described, the exposures were imputed based on the job-exposure matrix. As such, the correlations between the job and the particular exposure are really quite high, making it essentially impossible to tease out whether it is, for example, being a hairdresser, or being exposed to fluorocarbons as a hairdresser, or being exposed to something else as a hairdresser, that is the problem.

This is how these types of studies work; they tend to raise more questions than they answer. But in a world where a cancer diagnosis can seem to come completely out of the blue, they provide the starting point that someday may lead to a more definitive culprit agent or group of agents. Until then, it might be wise for hairdressers to make sure their workplace is well ventilated.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no conflicts of interest.

A version of this article first appeared on Medscape.com.


The most important question in medicine

Article Type
Changed
Tue, 06/27/2023 - 13:22

Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr. F. Perry Wilson of the Yale School of Medicine.

Today I am going to tell you the single best question you can ask any doctor, the one that has saved my butt countless times throughout my career, the one that every attending physician should be asking every intern and resident when they present a new case. That question: “What else could this be?”

I know, I know – “When you hear hoofbeats, think horses, not zebras.” I get it. But sometimes we get so good at our jobs, so good at recognizing horses, that we stop asking ourselves about zebras at all. You see this in a phenomenon known as “anchoring bias” where physicians, when presented with a diagnosis, tend to latch on to that diagnosis based on the first piece of information given, paying attention to data that support it and ignoring data that point in other directions.

That special question – “What else could this be?” – breaks through that barrier. It forces you, the medical team, everyone, to go through the exercise of real, old-fashioned differential diagnosis. And I promise that if you do this enough, at some point it will save someone’s life.

Though the concept of anchoring bias in medicine is broadly understood, it hasn’t been broadly studied until now, with this study appearing in JAMA Internal Medicine.

Here’s the setup.

The authors hypothesized that there would be substantial anchoring bias when patients with heart failure presented to the emergency department with shortness of breath if the triage “visit reason” section mentioned HF. We’re talking about the subtle difference between the following:

  • Visit reason: Shortness of breath
  • Visit reason: Shortness of breath/HF

People with HF can be short of breath for lots of reasons. HF exacerbation comes immediately to mind and it should. But there are obviously lots of answers to that “What else could this be?” question: pneumonia, pneumothorax, heart attack, COPD, and, of course, pulmonary embolism (PE).

The authors leveraged the nationwide VA database, allowing them to examine data from over 100,000 patients presenting to various VA EDs with shortness of breath. They then looked for particular tests – D-dimer, CT chest with contrast, V/Q scan, lower-extremity Doppler – that would suggest that the doctor was thinking about PE. The question, then, is whether mentioning HF in that little “visit reason” section would influence the likelihood of testing for PE.

I know what you’re thinking: Not everyone who is short of breath needs an evaluation for PE. And the authors did a nice job accounting for a variety of factors that might predict a PE workup: malignancy, recent surgery, elevated heart rate, low oxygen saturation, etc. Of course, some of those same factors might predict whether that triage nurse will write HF in the visit reason section. All of these things need to be accounted for statistically, and were, but – the unofficial Impact Factor motto reminds us that “there are always more confounders.”

But let’s dig into the results. I’m going to give you the raw numbers first. There were 4,392 people with HF whose visit reason section, in addition to noting shortness of breath, explicitly mentioned HF. Of those, 360 had PE testing and two had a PE diagnosed during that ED visit. So that’s around an 8% testing rate and a 0.5% hit rate for testing. But 43 people, presumably not tested in the ED, had a PE diagnosed within the next 30 days. Assuming that those PEs were present at the ED visit, that means the ED missed 95% of the PEs in the group with that HF label attached to them.

Let’s do the same thing for those whose visit reason just said “shortness of breath.”

Of the 103,627 people in that category, 13,886 were tested for PE and 231 of those tested positive. So that is an overall testing rate of around 13% and a hit rate of 1.7%. And 1,081 of these people had a PE diagnosed within 30 days. Assuming that those PEs were actually present at the ED visit, the docs missed 79% of them.

There’s one other thing to notice from the data: The overall PE rate (diagnosed by 30 days) was basically the same in both groups. That HF label does not really flag a group at lower risk for PE.
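Those percentages can be checked from the raw counts. The sketch below reads each 30-day PE count as a total that includes the ED diagnoses – the reading that reproduces the quoted 95% and 79% missed fractions:

```python
# Recomputing the quoted rates from the raw counts in the text, treating each
# 30-day PE count as a total that includes the ED diagnoses (the reading that
# matches the quoted missed fractions).
def rates(n, tested, ed_dx, dx_30d):
    return {
        "testing rate %": round(100 * tested / n, 1),
        "hit rate %": round(100 * ed_dx / tested, 1),
        "missed %": round(100 * (dx_30d - ed_dx) / dx_30d),
        "overall PE %": round(100 * dx_30d / n, 1),
    }

hf_label = rates(n=4_392, tested=360, ed_dx=2, dx_30d=43)
sob_only = rates(n=103_627, tested=13_886, ed_dx=231, dx_30d=1_081)
print(hf_label)
print(sob_only)
```

Run this and the two groups come out to roughly 8% vs. 13% testing, 95% vs. 79% missed, and an essentially identical ~1% overall 30-day PE rate, matching the figures above to rounding.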

Yes, there are a lot of assumptions here, including that all PEs that were actually there in the ED got caught within 30 days, but the numbers do paint a picture. In this unadjusted analysis, it seems that the HF label leads to less testing and more missed PEs. Classic anchoring bias.

The adjusted analysis, accounting for all those PE risk factors, really didn’t change these results. You get nearly the same numbers and thus nearly the same conclusions.

Now, the main missing piece of this puzzle is in the mind of the clinician. We don’t know whether they didn’t consider PE or whether they considered PE but thought it unlikely. And in the end, it’s clear that the vast majority of people in this study did not have PE (though I suspect not all had a simple HF exacerbation). But this type of analysis is useful not only for its empirical evidence of the clinical impact of anchoring bias, but because it reminds us all to ask that all-important question: What else could this be?

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no conflicts of interest.

A version of this article first appeared on Medscape.com.


The cardiopulmonary effects of mask wearing

Article Type
Changed
Thu, 06/15/2023 - 15:33

This transcript has been edited for clarity.

Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr. F. Perry Wilson of the Yale School of Medicine.

There was a time when I would have had to explain to you what an N95 mask is, how it is designed to filter out 95% of fine particles, defined as stuff in the air less than 2.5 microns in size.

But of course, you know that now. The N95 had its moment – a moment that seemed to be passing as the concentration of airborne coronavirus particles decreased.

Wikimedia Commons


But, as the poet said, all that is less than 2.5 microns in size is not coronavirus. Wildfire smoke is also chock full of fine particulate matter. And so, N95s are having something of a comeback.

That’s why an article that took a deep look at what happens to our cardiovascular system when we wear N95 masks caught my eye. In a carefully controlled experiment, you can prove that, from the perspective of your heart, wearing these masks is different from not wearing these masks – but just barely.

Mask wearing has been the subject of intense debate around the country. While the vast majority of evidence, as well as the personal experience of thousands of doctors, suggests that wearing a mask has no significant physiologic effects, it’s not hard to find those who suggest that mask wearing depletes oxygen levels, or leads to infection, or has other bizarre effects.

In a world of conflicting opinions, a controlled study is a wonderful thing, and that’s what appeared in JAMA Network Open.

This isn’t a huge study, but it’s big enough to make some important conclusions. Thirty individuals, all young and healthy, half female, were enrolled. Each participant spent 3 days in a metabolic chamber; this is essentially a giant, airtight room where all the inputs (oxygen levels and so on) and outputs (carbon dioxide levels and so on) can be precisely measured.

JAMA Network Open


After a day of getting used to the environment, the participants spent a day either wearing an N95 mask or not for 16 waking hours. On the next day, they switched. Every other variable was controlled, from the calories in their diet to the temperature of the room itself.

They engaged in light exercise twice during the day – riding a stationary bike – and a host of physiologic parameters were measured. The question being, would the wearing of the mask for 16 hours straight change anything?

And the answer is yes, some things changed, but not by much.

Here’s a graph of the heart rate over time. You can see some separation, with higher heart rates during the mask-wearing day, particularly around 11 a.m. – when light exercise was scheduled.

JAMA Network Open


Zooming in on the exercise period makes the difference clearer. The heart rate was about eight beats/min higher while masked and engaging in exercise. Systolic blood pressure was about 6 mm Hg higher. Oxygen saturation was lower by 0.7%.

JAMA Network Open


So yes, exercising while wearing an N95 mask might be different from exercising without an N95 mask. But nothing here looks dangerous to me. The 0.7% decrease in oxygen saturation is smaller than the typical measurement error of a pulse oximeter. The authors write that venous pH decreased during the masked day, which is of more interest to me as a nephrologist, but they don’t show that data even in the supplement. I suspect it didn’t decrease much.
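To make the crossover arithmetic concrete, here’s a minimal sketch using simulated data (not the study’s): in a crossover design each participant serves as their own control, so the quantity of interest is the within-person masked-minus-unmasked difference. The +8 beats/min effect size plugged in below is simply the figure quoted above; everything else is assumption.

```python
import math
import random
import statistics

# Simulated crossover sketch: 30 participants, each with a masked and an
# unmasked exercise heart rate. The ~8 bpm effect is the article's figure;
# the baseline HR and noise levels are invented for illustration.
random.seed(0)
unmasked = [random.gauss(110, 10) for _ in range(30)]
masked = [hr + random.gauss(8, 4) for hr in unmasked]  # assumed +8 bpm shift

# Paired (within-person) differences are what the crossover design buys you.
diffs = [m - u for m, u in zip(masked, unmasked)]
mean_d = statistics.mean(diffs)
se = statistics.stdev(diffs) / math.sqrt(len(diffs))

print(f"mean paired difference: {mean_d:.1f} bpm "
      f"(95% CI ~ {mean_d - 1.96 * se:.1f} to {mean_d + 1.96 * se:.1f})")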

They also showed that respiratory rate during exercise decreased in the masked condition. That doesn’t really make sense when you think about it in the context of the other findings, which are all suggestive of increased metabolic rate and sympathetic drive. Does that call the whole procedure into question? No, but it’s worth noting.

These were young, healthy people. You could certainly argue that those with more vulnerable cardiopulmonary status might have had different effects from mask wearing, but without a specific study in those people, it’s just conjecture. Clearly, this study lets us conclude that mask wearing at rest has less of an effect than mask wearing during exercise.

But remember that, in reality, we are wearing masks for a reason. One could imagine a study where this metabolic chamber was filled with wildfire smoke at a concentration similar to what we saw in New York. In that situation, we might find that wearing an N95 is quite helpful. The thing is, studying masks in isolation is useful because you can control so many variables. But masks aren’t used in isolation. In fact, that’s sort of their defining characteristic.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator. He reported no conflicts of interest.

A version of this article first appeared on Medscape.com.


COVID boosters effective, but not for long

Updated Wed, 05/31/2023 - 12:37

This transcript has been edited for clarity.

Welcome to Impact Factor, your weekly dose of commentary on a new medical study.

I am here today to talk about the effectiveness of COVID vaccine boosters in the midst of 2023. The reason I want to talk about this isn’t necessarily to dig into exactly how effective vaccines are. This is an area that’s been trod upon multiple times. But it does give me an opportunity to talk about a neat study design called the “test-negative case-control” design, which has some unique properties when you’re trying to evaluate the effect of something outside of the context of a randomized trial.

So, just a little bit of background to remind everyone where we are. These are the number of doses of COVID vaccines administered over time throughout the pandemic.

Centers for Disease Control and Prevention


You can see that it’s stratified by age. The orange lines are adults ages 18-49, for example. There’s a big wave of vaccination when the vaccine first came out at the start of 2021, then smaller waves after the first and second booster authorizations, and maybe a bit of an uptick, particularly among older adults, when the bivalent boosters were authorized. Still, overall uptake of the bivalent booster was very low compared with the monovalent vaccines, which might suggest vaccine fatigue this far into the pandemic. That makes it all the more important to understand exactly how effective those new boosters are, at least at this point in time.

I’m talking about Early Estimates of Bivalent mRNA Booster Dose Vaccine Effectiveness in Preventing Symptomatic SARS-CoV-2 Infection Attributable to Omicron BA.5– and XBB/XBB.1.5–Related Sublineages Among Immunocompetent Adults – Increasing Community Access to Testing Program, United States, December 2022–January 2023, which came out in the Morbidity and Mortality Weekly Report very recently, which uses this test-negative case-control design to evaluate the ability of bivalent mRNA vaccines to prevent hospitalization.

The question is: Does receipt of a bivalent COVID vaccine booster prevent hospitalizations, ICU stay, or death? That may not be the question that is of interest to everyone. I know people are interested in symptoms, missed work, and transmission, but this paper was looking at hospitalization, ICU stay, and death.

What’s kind of tricky here is that the data they’re using are in people who are hospitalized with various diseases. It’s a little bit counterintuitive to ask yourself: “How can you estimate the vaccine’s ability to prevent hospitalization using only data from hospitalized patients?” You might look at that on the surface and say: “Well, you can’t – that’s impossible.” But you can, actually, with this cool test-negative case-control design.

Here’s basically how it works. You take a population of people who are hospitalized and confirmed to have COVID. Some of them will be vaccinated and some of them will be unvaccinated. And the proportion of vaccinated and unvaccinated people doesn’t tell you very much because it depends on how that compares with the rates in the general population, for instance. Let me clarify this. If 100% of the population were vaccinated, then 100% of the people hospitalized with COVID would be vaccinated. That doesn’t mean vaccines are bad. Put another way, if 90% of the population were vaccinated and 60% of people hospitalized with COVID were vaccinated, that would actually show that the vaccines were working to some extent, all else being equal. So it’s not just the raw percentages that tell you anything. Some people are vaccinated, some people aren’t. You need to understand what the baseline rate is.
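That baseline logic can be made concrete. In a test-negative-style comparison, vaccine effectiveness is conventionally estimated as VE = 1 − OR, where the odds ratio compares the odds of vaccination among cases with the odds among controls. Plugging in the hypothetical from the paragraph above (90% of the population vaccinated, but only 60% of COVID hospitalizations vaccinated):

```python
# VE = 1 - OR, the conventional estimator in test-negative designs.
# The 60%/90% figures are the article's own hypothetical, not study data.

def vaccine_effectiveness(p_vax_cases: float, p_vax_controls: float) -> float:
    """VE = 1 - OR, where the OR compares the odds of vaccination
    in cases vs. the baseline (control) population."""
    odds_cases = p_vax_cases / (1 - p_vax_cases)
    odds_controls = p_vax_controls / (1 - p_vax_controls)
    return 1 - odds_cases / odds_controls

ve = vaccine_effectiveness(0.60, 0.90)
print(f"VE = {ve:.0%}")  # 83%
```

So a vaccinated fraction well below the baseline rate implies substantial protection, which is exactly the intuition the paragraph above walks through.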

The test-negative case-control design looks at people who are hospitalized without COVID. Now who those people are (who the controls are, in this case) is something you really need to think about. In the case of this CDC study, they used people who were hospitalized with COVID-like illnesses – flu-like illnesses, respiratory illnesses, pneumonia, influenza, etc. This is a pretty good idea because it standardizes a little bit for people who have access to healthcare. They can get to a hospital and they’re the type of person who would go to a hospital when they’re feeling sick. That’s a better control than the general population overall, which is something I like about this design.

Some of those people who don’t have COVID (they’re in the hospital for flu or whatever) will have been vaccinated for COVID, and some will not have been vaccinated for COVID. And of course, we don’t expect COVID vaccines necessarily to protect against the flu or pneumonia, but that gives us a way to standardize.

Dr. F. Perry Wilson


If you look at these Venn diagrams, I’ve got vaccinated/unvaccinated being exactly the same proportion, which would suggest that you’re just as likely to be hospitalized with COVID if you’re vaccinated as you are to be hospitalized with some other respiratory illness, which suggests that the vaccine isn’t particularly effective.

Dr. F. Perry Wilson


However, if you saw something like this, looking at all those patients with flu and other non-COVID illnesses, a lot more of them had been vaccinated for COVID. What that tells you is that we’re seeing fewer vaccinated people hospitalized with COVID than we would expect because we have this standardization from other respiratory infections. We expect this many vaccinated people because that’s how many vaccinated people there are who show up with flu. But in the COVID population, there are fewer, and that would suggest that the vaccines are effective. So that is the test-negative case-control design. You can do the same thing with ICU stays and death.

There are some assumptions here which you might already be thinking about. The most important one is that vaccination status is not associated with the risk for the disease. I always think of older people in this context. During the pandemic, at least in the United States, older people were much more likely to be vaccinated but were also much more likely to contract COVID and be hospitalized with COVID. The test-negative design actually accounts for this in some sense, because older people are also more likely to be hospitalized for things like flu and pneumonia. So there’s some control there.

But to the extent that older people are uniquely susceptible to COVID compared with other respiratory illnesses, that would bias your results to make the vaccines look worse. So the standard approach here is to adjust for these things. I think the CDC adjusted for age, sex, race, ethnicity, and a few other things to settle down and see how effective the vaccines were.

Let’s get to a worked example.

Dr. F. Perry Wilson


This is the actual data from the CDC paper. They had 6,907 individuals who were hospitalized with COVID, and 26% of them were unvaccinated. What’s the baseline rate that we would expect to be unvaccinated? A total of 59,234 individuals were hospitalized with a non-COVID respiratory illness, and 23% of them were unvaccinated. So you can see that there were more unvaccinated people than you would think in the COVID group. In other words, fewer vaccinated people, which suggests that the vaccine works to some degree because it’s keeping some people out of the hospital.
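Running those quoted percentages through the same VE = 1 − OR arithmetic gives the crude, unadjusted estimate. The paper’s adjusted figures differ, so treat this strictly as back-of-envelope:

```python
# Crude VE from the CDC proportions quoted above: 26% of the 6,907 COVID
# hospitalizations were unvaccinated vs. 23% of the 59,234 non-COVID
# respiratory hospitalizations. Unadjusted, for illustration only.

p_unvax_cases, p_unvax_controls = 0.26, 0.23

odds_vax_cases = (1 - p_unvax_cases) / p_unvax_cases        # 0.74 / 0.26
odds_vax_controls = (1 - p_unvax_controls) / p_unvax_controls  # 0.77 / 0.23

ve = 1 - odds_vax_cases / odds_vax_controls
print(f"crude VE = {ve:.0%}")  # about 15%
```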

Now, 26% versus 23% is not a very impressive difference. But it gets more interesting when you break it down by the type of vaccine and how long ago the individual was vaccinated.

Dr. F. Perry Wilson


Let’s walk through the “all” group on this figure. What you can see is the calculated vaccine effectiveness. If you look at just the monovalent vaccine here, we see a 20% vaccine effectiveness. This means that vaccination is preventing roughly 20% of the COVID hospitalizations that would otherwise occur. That’s okay, but it’s certainly nothing to write home about. We see much better vaccine effectiveness with the bivalent vaccine if it had been received within the past 60 days.

This compares people who received the bivalent vaccine within 60 days in the COVID group and the non-COVID group. Any effect of the vaccine having been given very recently applies to both groups equally, so it shouldn’t introduce bias there. You see a step-off in vaccine effectiveness across the less-than-60-day, 60- to 120-day, and greater-than-120-day windows. That’s just 4 months, and you’ve gone from 60% to 20%. When you break that down by age, you see a similar pattern in the 18-to-65 group and potentially somewhat more protection in the greater-than-65 age group.

Why is vaccine efficacy going down? The study doesn’t tell us, but we can hypothesize that this might be an immunologic effect – the antibodies or the protective T cells are waning over time. This could also reflect changes in the virus in the environment as the virus seeks to evade certain immune responses. But overall, this suggests that waiting a year between booster doses may leave you exposed for quite some time, although the take-home here is that bivalent vaccines in general are probably a good idea for the proportion of people who haven’t gotten them.
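Just as a toy model – the assumptions here (exponential decay of VE itself, and interval midpoints of roughly 30 and 150 days) are mine, not the study’s – you can back out an implied “half-life” of protection from the ~60% and ~20% point estimates:

```python
import math

# Toy waning model: assume VE(t) = VE0 * exp(-k * t), anchored at assumed
# midpoints of the reporting bins (~30 days for "<60 days", ~150 days for
# ">120 days") with the article's ~60% and ~20% point estimates.

t1, ve1 = 30, 0.60
t2, ve2 = 150, 0.20

k = math.log(ve1 / ve2) / (t2 - t1)   # implied decay constant per day
half_life = math.log(2) / k           # days for VE to halve under this model

print(f"implied VE half-life ~ {half_life:.0f} days")  # ~76 days
```

Under these (strong) assumptions, protection halves in roughly two and a half months, which is consistent with the eyeballed 60%-to-20% drop over 4 months.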

When we look at critical illness and death, the numbers look a little bit better.

Dr. F. Perry Wilson


You can see that bivalent is better than monovalent – certainly pretty good if you’ve received it within 60 days. It does tend to wane a little bit, but not nearly as much. You’ve still got about 50% vaccine efficacy beyond 120 days when we’re looking at critical illness, which here means ICU stays and death.

The overriding thing to think about when we think about vaccine policy is that the way you get immunized against COVID is either by vaccine or by getting infected with COVID, or both.

Centers for Disease Control and Prevention


This really interesting graph from the CDC (although it’s updated only through quarter three of 2022) shows the proportion of Americans, based on routine lab tests, who have varying degrees of protection against COVID. What you can see is that, by quarter three of 2022, just 3.6% of people who had blood drawn at a commercial laboratory had no evidence of infection or vaccination. In other words, almost no one was totally naive. Then 26% of people had never been infected – they only have vaccine antibodies – plus 22% of people had only been infected but had never been vaccinated. And then 50% of people had both. So there’s a tremendous amount of existing immunity out there.

The really interesting question about future vaccination and future booster doses is, how does it work on the background of this pattern? The CDC study doesn’t tell us, and I don’t think they have the data to tell us the vaccine efficacy in these different groups. Is it more effective in people who have only had an infection, for example? Is it more effective in people who have only had vaccination versus people who had both, or people who have no protection whatsoever? Those are the really interesting questions that need to be answered going forward as vaccine policy gets developed in the future.

I hope this was a helpful primer on how the test-negative case-control design can answer questions that seem a little bit unanswerable.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator. He disclosed no relevant conflicts of interest.
 

A version of this article first appeared on Medscape.com.


Why is vaccine efficacy going down? The study doesn’t tell us, but we can hypothesize that this might be an immunologic effect – the antibodies or the protective T cells are waning over time. This could also reflect changes in the virus in the environment as the virus seeks to evade certain immune responses. But overall, this suggests that waiting a year between booster doses may leave you exposed for quite some time, although the take-home here is that bivalent vaccines in general are probably a good idea for the proportion of people who haven’t gotten them.

When we look at critical illness and death, the numbers look a little bit better.

Dr. F. Perry Wilson


You can see that bivalent is better than monovalent – certainly pretty good if you’ve received it within 60 days. It does tend to wane a little bit, but not nearly as much. You’ve still got about 50% vaccine efficacy beyond 120 days when we’re looking at critical illness, which is stays in the ICU and death.

The overriding thing to think about when we think about vaccine policy is that the way you get immunized against COVID is either by vaccine or by getting infected with COVID, or both.

Centers for Disease Control and Prevention


This really interesting graph from the CDC (although it’s updated only through quarter three of 2022) shows the proportion of Americans, based on routine lab tests, who have varying degrees of protection against COVID. What you can see is that, by quarter three of 2022, just 3.6% of people who had blood drawn at a commercial laboratory had no evidence of infection or vaccination. In other words, almost no one was totally naive. Then 26% of people had never been infected – they only have vaccine antibodies – plus 22% of people had only been infected but had never been vaccinated. And then 50% of people had both. So there’s a tremendous amount of existing immunity out there.

The really interesting question about future vaccination and future booster doses is, how does it work on the background of this pattern? The CDC study doesn’t tell us, and I don’t think they have the data to tell us the vaccine efficacy in these different groups. Is it more effective in people who have only had an infection, for example? Is it more effective in people who have only had vaccination versus people who had both, or people who have no protection whatsoever? Those are the really interesting questions that need to be answered going forward as vaccine policy gets developed in the future.

I hope this was a helpful primer on how the test-negative case-control design can answer questions that seem a little bit unanswerable.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator. He disclosed no relevant conflicts of interest.
 

A version of this article first appeared on Medscape.com.

This transcript has been edited for clarity.

Welcome to Impact Factor, your weekly dose of commentary on a new medical study.

I am here today to talk about the effectiveness of COVID vaccine boosters here in 2023. The reason I want to talk about this isn't really to dig into exactly how effective the vaccines are – that ground has been well trodden – but it gives me an opportunity to talk about a neat study design called the "test-negative case-control" design, which has some unique properties when you're trying to evaluate the effect of an intervention outside the context of a randomized trial.

So, just a little bit of background to remind everyone where we are. These are the number of doses of COVID vaccines administered over time throughout the pandemic.

[Figure: COVID vaccine doses administered over time, stratified by age group. Source: Centers for Disease Control and Prevention]

You can see that it’s stratified by age. The orange lines are adults ages 18-49, for example. You can see a big wave of vaccination when the vaccine first came out at the start of 2021. Then subsequently, you can see smaller waves after the first and second booster authorizations, and maybe a bit of a pickup, particularly among older adults, when the bivalent boosters were authorized. But still very little overall pickup of the bivalent booster, compared with the monovalent vaccines, which might suggest vaccine fatigue going on this far into the pandemic. But it’s important to try to understand exactly how effective those new boosters are, at least at this point in time.

I’m talking about Early Estimates of Bivalent mRNA Booster Dose Vaccine Effectiveness in Preventing Symptomatic SARS-CoV-2 Infection Attributable to Omicron BA.5– and XBB/XBB.1.5–Related Sublineages Among Immunocompetent Adults – Increasing Community Access to Testing Program, United States, December 2022–January 2023, which came out in the Morbidity and Mortality Weekly Report very recently, which uses this test-negative case-control design to evaluate the ability of bivalent mRNA vaccines to prevent hospitalization.

The question is: Does receipt of a bivalent COVID vaccine booster prevent hospitalizations, ICU stay, or death? That may not be the question that is of interest to everyone. I know people are interested in symptoms, missed work, and transmission, but this paper was looking at hospitalization, ICU stay, and death.

What’s kind of tricky here is that the data they’re using are in people who are hospitalized with various diseases. It’s a little bit counterintuitive to ask yourself: “How can you estimate the vaccine’s ability to prevent hospitalization using only data from hospitalized patients?” You might look at that on the surface and say: “Well, you can’t – that’s impossible.” But you can, actually, with this cool test-negative case-control design.

Here’s basically how it works. You take a population of people who are hospitalized and confirmed to have COVID. Some of them will be vaccinated and some of them will be unvaccinated. And the proportion of vaccinated and unvaccinated people doesn’t tell you very much because it depends on how that compares with the rates in the general population, for instance. Let me clarify this. If 100% of the population were vaccinated, then 100% of the people hospitalized with COVID would be vaccinated. That doesn’t mean vaccines are bad. Put another way, if 90% of the population were vaccinated and 60% of people hospitalized with COVID were vaccinated, that would actually show that the vaccines were working to some extent, all else being equal. So it’s not just the raw percentages that tell you anything. Some people are vaccinated, some people aren’t. You need to understand what the baseline rate is.

The test-negative case-control design looks at people who are hospitalized without COVID. Now who those people are (who the controls are, in this case) is something you really need to think about. In the case of this CDC study, they used people who were hospitalized with COVID-like illnesses – flu-like illnesses, respiratory illnesses, pneumonia, influenza, etc. This is a pretty good idea because it standardizes a little bit for people who have access to healthcare. They can get to a hospital and they’re the type of person who would go to a hospital when they’re feeling sick. That’s a better control than the general population overall, which is something I like about this design.

Some of those people who don’t have COVID (they’re in the hospital for flu or whatever) will have been vaccinated for COVID, and some will not have been vaccinated for COVID. And of course, we don’t expect COVID vaccines necessarily to protect against the flu or pneumonia, but that gives us a way to standardize.

[Figure: Venn diagram in which vaccinated and unvaccinated proportions are identical among COVID and non-COVID hospitalizations. Credit: Dr. F. Perry Wilson]

If you look at these Venn diagrams, I've drawn the vaccinated/unvaccinated proportions as exactly the same in both groups. That would mean you're just as likely to be hospitalized with COVID if you're vaccinated as to be hospitalized with some other respiratory illness, which suggests that the vaccine isn't particularly effective.

[Figure: Venn diagram in which vaccinated people make up a larger share of non-COVID than of COVID hospitalizations. Credit: Dr. F. Perry Wilson]

However, if you saw something like this, looking at all those patients with flu and other non-COVID illnesses, a lot more of them had been vaccinated for COVID. What that tells you is that we’re seeing fewer vaccinated people hospitalized with COVID than we would expect because we have this standardization from other respiratory infections. We expect this many vaccinated people because that’s how many vaccinated people there are who show up with flu. But in the COVID population, there are fewer, and that would suggest that the vaccines are effective. So that is the test-negative case-control design. You can do the same thing with ICU stays and death.
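
To see that logic in numbers, here is a toy deterministic sketch (entirely invented figures, not the CDC's data): a vaccine with a true 60% effectiveness against COVID hospitalization and no effect on other respiratory hospitalizations. One minus the vaccination odds ratio, cases versus test-negative controls, recovers the true effectiveness:

```python
# Toy deterministic example of the test-negative design.
# All numbers are invented for illustration (not the CDC's data).
pop = 1_000_000
vacc_rate = 0.80
n_vacc = int(pop * vacc_rate)
n_unvacc = pop - n_vacc

true_ve = 0.60             # vaccine cuts COVID hospitalization risk by 60%
covid_risk_unvacc = 0.010  # COVID hospitalization risk if unvaccinated
other_risk = 0.020         # non-COVID respiratory hospitalization risk, everyone

# Expected hospital counts: cases are COVID-positive, controls are test-negative.
cases_vacc = n_vacc * covid_risk_unvacc * (1 - true_ve)
cases_unvacc = n_unvacc * covid_risk_unvacc
controls_vacc = n_vacc * other_risk
controls_unvacc = n_unvacc * other_risk

# Odds of vaccination among cases versus among controls.
odds_ratio = (cases_vacc / cases_unvacc) / (controls_vacc / controls_unvacc)
estimated_ve = 1 - odds_ratio
print(f"Estimated VE: {estimated_ve:.0%}")  # recovers the true 60%
```

Notice that the non-COVID hospitalization rate cancels out of the odds ratio entirely; the controls only need to reflect the vaccination rate of the source population, not its disease risks.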

There are some assumptions here which you might already be thinking about. The most important one is that, apart from any effect of the vaccine itself, vaccination status is not associated with the risk for the disease. I always think of older people in this context. During the pandemic, at least in the United States, older people were much more likely to be vaccinated but were also much more likely to contract COVID and be hospitalized with COVID. The test-negative design actually accounts for this in some sense, because older people are also more likely to be hospitalized for things like flu and pneumonia. So there's some control there.

But to the extent that older people are uniquely susceptible to COVID compared with other respiratory illnesses, that would bias your results to make the vaccines look worse. So the standard approach here is to adjust for these things. I think the CDC adjusted for age, sex, race, ethnicity, and a few other things to settle on an estimate of how effective the vaccines were.
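
Here's the same toy setup extended to show that bias (again, invented numbers). Older people in this sketch are both more vaccinated and relatively more prone to COVID than to other respiratory illness. Within each age stratum the design recovers the true 50% effectiveness, but the crude pooled estimate understates it:

```python
# Toy demonstration of confounding by age (all numbers invented).
def stratum_counts(n, vacc_rate, covid_risk_unvacc, other_risk, true_ve=0.50):
    """Expected case/control counts for one age stratum."""
    n_v = n * vacc_rate
    n_u = n * (1 - vacc_rate)
    return {
        "cases_v": n_v * covid_risk_unvacc * (1 - true_ve),
        "cases_u": n_u * covid_risk_unvacc,
        "ctrls_v": n_v * other_risk,
        "ctrls_u": n_u * other_risk,
    }

def ve_from(c):
    """1 minus the vaccination odds ratio, cases vs. test-negative controls."""
    odds_ratio = (c["cases_v"] / c["cases_u"]) / (c["ctrls_v"] / c["ctrls_u"])
    return 1 - odds_ratio

# Older: heavily vaccinated, COVID risk as high as other-illness risk.
old = stratum_counts(500_000, vacc_rate=0.90,
                     covid_risk_unvacc=0.040, other_risk=0.040)
# Younger: less vaccinated, COVID risk half the other-illness risk.
young = stratum_counts(500_000, vacc_rate=0.60,
                       covid_risk_unvacc=0.005, other_risk=0.010)
pooled = {k: old[k] + young[k] for k in old}

print(f"VE, older stratum:   {ve_from(old):.0%}")    # 50%
print(f"VE, younger stratum: {ve_from(young):.0%}")  # 50%
print(f"VE, crude pooled:    {ve_from(pooled):.0%}")  # ~38%, biased downward
```

Adjusting for age, as the CDC did, amounts to estimating the effect within strata like these rather than from the pooled table.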

Let’s get to a worked example.

[Figure: Counts of vaccinated and unvaccinated patients among COVID and non-COVID hospitalizations in the CDC study. Credit: Dr. F. Perry Wilson]

This is the actual data from the CDC paper. They had 6,907 individuals who were hospitalized with COVID, and 26% of them were unvaccinated. What’s the baseline rate that we would expect to be unvaccinated? A total of 59,234 individuals were hospitalized with a non-COVID respiratory illness, and 23% of them were unvaccinated. So you can see that there were more unvaccinated people than you would think in the COVID group. In other words, fewer vaccinated people, which suggests that the vaccine works to some degree because it’s keeping some people out of the hospital.

Now, 26% versus 23% is not a very impressive difference. But it gets more interesting when you break it down by the type of vaccine and how long ago the individual was vaccinated.
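
For what it's worth, you can turn those two percentages into a crude, unadjusted effectiveness estimate (1 minus the vaccination odds ratio). This back-of-envelope number ignores every adjustment the CDC actually made, so treat it only as a rough sanity check:

```python
# Crude (unadjusted) effectiveness from the two percentages quoted above.
unvacc_share_cases = 0.26     # among 6,907 COVID hospitalizations
unvacc_share_controls = 0.23  # among 59,234 non-COVID hospitalizations

# Odds of vaccination among cases vs. among controls.
odds_ratio = ((1 - unvacc_share_cases) / unvacc_share_cases) / (
    (1 - unvacc_share_controls) / unvacc_share_controls
)
crude_ve = 1 - odds_ratio
print(f"Crude VE, any vaccination vs. none: {crude_ve:.0%}")  # ~15%
```

A modest overall number, which is exactly why the breakdown by vaccine type and time since vaccination is the interesting part.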

[Figure: Vaccine effectiveness against COVID hospitalization by vaccine type and time since last dose, overall and by age group. Credit: Dr. F. Perry Wilson]

Let’s walk through the “all” group on this figure. What you can see is the calculated vaccine effectiveness. If you look at just the monovalent vaccine here, we see a 20% vaccine effectiveness. This means that you’re preventing 20% of hospitalizations basically due to COVID by people getting vaccinated. That’s okay but it’s certainly not anything to write home about. But we see much better vaccine effectiveness with the bivalent vaccine if it had been received within 60 days.

This compares people who received the bivalent vaccine within 60 days in the COVID group and the non-COVID group. Recency of vaccination applies to both groups equally, so it shouldn't introduce bias there. You see a step-off in vaccine effectiveness from less than 60 days, to 60-120 days, to greater than 120 days. By 4 months out, you've gone from 60% effectiveness to 20%. When you break that down by age, you can see a similar pattern in the 18-to-65 group and potentially somewhat more protection in the greater-than-65 group.

Why is vaccine effectiveness going down? The study doesn't tell us, but we can hypothesize that this might be an immunologic effect – the antibodies or the protective T cells are waning over time. It could also reflect changes in the circulating virus as it evolves to evade certain immune responses. But overall, this suggests that waiting a year between booster doses may leave you exposed for quite some time, although the take-home here is that bivalent vaccines in general are probably a good idea for the many people who haven't gotten them.

When we look at critical illness and death, the numbers look a little bit better.

[Figure: Vaccine effectiveness against critical illness (ICU admission or death) by vaccine type and time since last dose. Credit: Dr. F. Perry Wilson]

You can see that bivalent is better than monovalent – certainly pretty good if you've received it within 60 days. It does tend to wane a little bit, but not nearly as much. You've still got about 50% vaccine effectiveness beyond 120 days when we're looking at critical illness, defined here as ICU admission or death.

The overriding thing to keep in mind when we think about vaccine policy is that you become immunized against COVID either by vaccination or by infection, or both.

[Figure: Proportions of Americans with vaccine-derived, infection-derived, both, or no SARS-CoV-2 antibodies, through the third quarter of 2022. Source: Centers for Disease Control and Prevention]

This really interesting graph from the CDC (although it's updated only through the third quarter of 2022) shows the proportion of Americans, based on routine lab tests, who have varying degrees of protection against COVID. What you can see is that, by the third quarter of 2022, just 3.6% of people who had blood drawn at a commercial laboratory had no evidence of infection or vaccination. In other words, almost no one was immunologically naive. Another 26% of people had never been infected and had only vaccine antibodies; 22% had been infected but never vaccinated; and 50% had both. So there's a tremendous amount of existing immunity out there.

The really interesting question about future vaccination and future booster doses is: How do they work against the background of this pattern? The CDC study doesn't tell us, and I don't think the authors have the data to tell us the vaccine effectiveness in these different groups. Is it more effective in people who have only had an infection, for example? In people who have only had vaccination versus people who have had both, or people with no protection whatsoever? Those are the questions that need to be answered as vaccine policy is developed going forward.

I hope this was a helpful primer on how the test-negative case-control design can answer questions that seem a little bit unanswerable.

F. Perry Wilson, MD, MSCE, is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator. He disclosed no relevant conflicts of interest.
 

A version of this article first appeared on Medscape.com.


The 30th-birthday gift that could save a life

Article Type
Changed
Wed, 05/17/2023 - 09:16

 

This transcript has been edited for clarity.

Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr F. Perry Wilson of the Yale School of Medicine.

Milestone birthdays are always memorable – those ages when your life seems to fundamentally change somehow. Age 16: A license to drive. Age 18: You can vote to determine your own future and serve in the military. At 21, 3 years after adulthood, you are finally allowed to drink alcohol, for some reason. And then ... nothing much happens. At least until you turn 65 and become eligible for Medicare.

But imagine a future when turning 30 might be the biggest milestone birthday of all. Imagine a future when, at 30, you get your genome sequenced and doctors tell you what needs to be done to save your life.

That future may not be far off, as a new study shows us that screening every single 30-year-old in the United States for three particular genetic conditions may not only save lives but be reasonably cost-effective.

Getting your genome sequenced is a double-edged sword. Of course, there is the potential for substantial benefit; finding certain mutations allows for definitive therapy before it’s too late. That said, there are genetic diseases without a cure and without a treatment. Knowing about that destiny may do more harm than good.

Three conditions are described by the CDC as “Tier 1” conditions, genetic syndromes with a significant impact on life expectancy that also have definitive, effective therapies.

[Figure: The CDC's three Tier 1 genetic conditions. Credit: Dr. F. Perry Wilson]

These include mutations in BRCA1/2, associated with a high risk for breast and ovarian cancer; mutations associated with Lynch syndrome, which confer an elevated risk for colon cancer; and mutations associated with familial hypercholesterolemia, which confer an elevated risk for cardiovascular events.

In each of these cases, there is clear evidence that early intervention can save lives. Individuals at high risk for breast and ovarian cancer can get prophylactic mastectomy and salpingo-oophorectomy. Those with Lynch syndrome can get more frequent screening for colon cancer and polypectomy, and those with familial hypercholesterolemia can get aggressive lipid-lowering therapy.

I think most of us would probably want to know if we had one of these conditions. Most of us would use that information to take concrete steps to decrease our risk. But just because a rational person would choose to do something doesn’t mean it’s feasible. After all, we’re talking about tests and treatments that have significant costs.

In a recent issue of Annals of Internal Medicine, Josh Peterson and David Veenstra present a detailed accounting of the cost and benefit of a hypothetical nationwide, universal screening program for Tier 1 conditions. And in the end, it may actually be worth it.

Cost-effectiveness analyses work by comparing two independent policy choices: the status quo – in this case, a world in which some people get tested for these conditions, but generally only if they are at high risk based on strong family history; and an alternative policy – in this case, universal screening for these conditions starting at some age.

After that, it’s time to play the assumption game. Using the best available data, the authors estimated the percentage of the population that will have each condition, the percentage of those individuals who will definitively act on the information, and how effective those actions would be if taken.

The authors provide an example. First, they assume that the prevalence of mutations leading to a high risk for breast and ovarian cancer is around 0.7%, and that up to 40% of people who learn that they have one of these mutations would undergo prophylactic mastectomy, which would reduce the risk for breast cancer by around 94%. (I ran these numbers past my wife, a breast surgical oncologist, who agreed that they seem reasonable.)

Assumptions in place, it’s time to consider costs. The cost of the screening test itself: The authors use $250 as their average per-person cost. But we also have the cost of treatment – around $22,000 per person for a bilateral prophylactic mastectomy; the cost of statin therapy for those with familial hypercholesterolemia; or the cost of all of those colonoscopies for those with Lynch syndrome.

Finally, we assess quality of life. Obviously, living longer is generally considered better than living shorter, but marginal increases in life expectancy at the cost of quality of life might not be a rational choice.

You then churn these assumptions through a computer and see what comes out. How many dollars does it take to save one quality-adjusted life-year (QALY)? I’ll tell you right now that $50,000 per QALY used to be the unofficial standard for a “cost-effective” intervention in the United States. Researchers have more recently used $100,000 as that threshold.

Let’s look at some hard numbers.

If you screened 100,000 people at age 30 years, 1,500 would get news that something in their genetics was, more or less, a ticking time bomb. Some would choose to get definitive treatment and the authors estimate that the strategy would prevent 85 cases of cancer. You’d prevent nine heart attacks and five strokes by lowering cholesterol levels among those with familial hypercholesterolemia. Obviously, these aren’t huge numbers, but of course most people don’t have these hereditary risk factors. For your average 30-year-old, the genetic screening test will be completely uneventful, but for those 1,500 it will be life-changing, and potentially life-saving.
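
Those counts are easy to restate as numbers-needed-to-screen; this is just arithmetic on the figures above, not an estimate reported by the authors:

```python
# Back-of-envelope arithmetic on the counts quoted above.
screened = 100_000
flagged = 1_500            # told they carry a Tier 1 mutation
cancers_prevented = 85
heart_attacks_prevented = 9
strokes_prevented = 5

events_prevented = cancers_prevented + heart_attacks_prevented + strokes_prevented
print(f"Positive result rate: {flagged / screened:.1%}")                      # 1.5%
print(f"Screened per cancer prevented: {screened / cancers_prevented:,.0f}")  # 1,176
print(f"Screened per major event prevented: {screened / events_prevented:,.0f}")  # 1,010
```

Roughly a thousand 30-year-olds screened per bad outcome averted, which is the scale at which the price of the test itself starts to dominate the economics.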

But is it worth it? The authors estimate that, at the midpoint of all their assumptions, the cost of this program would be $68,000 per QALY saved.

Of course, that depends on all those assumptions we talked about. Interestingly, the single factor that changes the cost-effectiveness the most in this analysis is the cost of the genetic test itself, which I guess makes sense, considering we’d be talking about testing a huge segment of the population. If the test cost $100 instead of $250, the cost per QALY would be $39,700 – well within the range that most policymakers would support. And given the rate at which the cost of genetic testing is decreasing, and the obvious economies of scale here, I think $100 per test is totally feasible.
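
Because only the test price differs between those two scenarios, the pair of reported figures lets you back out roughly how many QALYs the program gains per person screened. This is my own back-of-envelope inference from the paper's two numbers, not a figure the authors report:

```python
# Two reported scenarios: (per-person test cost, cost per QALY saved).
test_hi, cpq_hi = 250, 68_000
test_lo, cpq_lo = 100, 39_700

# Cost/QALY = (fixed downstream costs + test_cost * n_screened) / total QALYs,
# so its slope with respect to test cost equals n_screened / total QALYs.
slope = (cpq_hi - cpq_lo) / (test_hi - test_lo)  # people screened per QALY
qaly_per_person = 1 / slope

print(f"People screened per QALY gained: {slope:,.0f}")            # ~189
print(f"QALYs gained per person screened: {qaly_per_person:.4f}")  # ~0.0053
```

About 0.005 quality-adjusted years, roughly 2 days, per person screened on average; as with most screening, the mean is driven by the rare individuals whose lives are dramatically extended.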

The future will bring other changes as well. Right now, there are only three hereditary conditions designated as Tier 1 by the CDC. If conditions are added, that might also swing the calculation more heavily toward benefit.

This will represent a stark change from how we think about genetic testing currently, focusing on those whose pretest probability of an abnormal result is high due to family history or other risk factors. But for the 20-year-olds out there, I wouldn’t be surprised if your 30th birthday is a bit more significant than you have been anticipating.
 

Dr. Wilson is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator in New Haven, Conn. He disclosed no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

This transcript has been edited for clarity.

Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr F. Perry Wilson of the Yale School of Medicine.

Milestone birthdays are always memorable – those ages when your life seems to fundamentally change somehow. Age 16: A license to drive. Age 18: You can vote to determine your own future and serve in the military. At 21, 3 years after adulthood, you are finally allowed to drink alcohol, for some reason. And then ... nothing much happens. At least until you turn 65 and become eligible for Medicare.

But imagine a future when turning 30 might be the biggest milestone birthday of all. Imagine a future when, at 30, you get your genome sequenced and doctors tell you what needs to be done to save your life.

That future may not be far off, as a new study shows us that screening every single 30-year-old in the United States for three particular genetic conditions may not only save lives but be reasonably cost-effective.

Getting your genome sequenced is a double-edged sword. Of course, there is the potential for substantial benefit; finding certain mutations allows for definitive therapy before it’s too late. That said, there are genetic diseases without a cure and without a treatment. Knowing about that destiny may do more harm than good.

Three conditions are described by the CDC as “Tier 1” conditions, genetic syndromes with a significant impact on life expectancy that also have definitive, effective therapies.

Dr. F. Perry Wilson


These include mutations like BRCA1/2, associated with a high risk for breast and ovarian cancer; mutations associated with Lynch syndrome, which confer an elevated risk for colon cancer; and mutations associated with familial hypercholesterolemia, which confer elevated risk for cardiovascular events.

In each of these cases, there is clear evidence that early intervention can save lives. Individuals at high risk for breast and ovarian cancer can get prophylactic mastectomy and salpingo-oophorectomy. Those with Lynch syndrome can get more frequent screening for colon cancer and polypectomy, and those with familial hypercholesterolemia can get aggressive lipid-lowering therapy.

I think most of us would probably want to know if we had one of these conditions. Most of us would use that information to take concrete steps to decrease our risk. But just because a rational person would choose to do something doesn’t mean it’s feasible. After all, we’re talking about tests and treatments that have significant costs.

In a recent issue of Annals of Internal Medicine, Josh Peterson and David Veenstra present a detailed accounting of the cost and benefit of a hypothetical nationwide, universal screening program for Tier 1 conditions. And in the end, it may actually be worth it.

Cost-benefit analyses work by comparing two independent policy choices: the status quo – in this case, a world in which some people get tested for these conditions, but generally only if they are at high risk based on strong family history; and an alternative policy – in this case, universal screening for these conditions starting at some age.

After that, it’s time to play the assumption game. Using the best available data, the authors estimated the percentage of the population that will have each condition, the percentage of those individuals who will definitively act on the information, and how effective those actions would be if taken.

The authors provide an example. First, they assume that the prevalence of mutations leading to a high risk for breast and ovarian cancer is around 0.7%, and that up to 40% of people who learn that they have one of these mutations would undergo prophylactic mastectomy, which would reduce the risk for breast cancer by around 94%. (I ran these numbers past my wife, a breast surgical oncologist, who agreed that they seem reasonable.)

Assumptions in place, it’s time to consider costs. The cost of the screening test itself: The authors use $250 as their average per-person cost. But we also have the cost of treatment – around $22,000 per person for a bilateral prophylactic mastectomy; the cost of statin therapy for those with familial hypercholesterolemia; or the cost of all of those colonoscopies for those with Lynch syndrome.

Finally, we assess quality of life. Obviously, living longer is generally considered better than living shorter, but marginal increases in life expectancy at the cost of quality of life might not be a rational choice.

You then churn these assumptions through a computer and see what comes out. How many dollars does it take to save one quality-adjusted life-year (QALY)? I’ll tell you right now that $50,000 per QALY used to be the unofficial standard for a “cost-effective” intervention in the United States. Researchers have more recently used $100,000 as that threshold.

Let’s look at some hard numbers.

If you screened 100,000 people at age 30 years, 1,500 would get news that something in their genetics was, more or less, a ticking time bomb. Some would choose to get definitive treatment and the authors estimate that the strategy would prevent 85 cases of cancer. You’d prevent nine heart attacks and five strokes by lowering cholesterol levels among those with familial hypercholesterolemia. Obviously, these aren’t huge numbers, but of course most people don’t have these hereditary risk factors. For your average 30-year-old, the genetic screening test will be completely uneventful, but for those 1,500 it will be life-changing, and potentially life-saving.

But is it worth it? The authors estimate that, at the midpoint of all their assumptions, the cost of this program would be $68,000 per QALY saved.

Of course, that depends on all those assumptions we talked about. Interestingly, the single factor that changes the cost-effectiveness the most in this analysis is the cost of the genetic test itself, which I guess makes sense, considering we’d be talking about testing a huge segment of the population. If the test cost $100 instead of $250, the cost per QALY would be $39,700 – well within the range that most policymakers would support. And given the rate at which the cost of genetic testing is decreasing, and the obvious economies of scale here, I think $100 per test is totally feasible.

The future will bring other changes as well. Right now, there are only three hereditary conditions designated as Tier 1 by the CDC. If conditions are added, that might also swing the calculation more heavily toward benefit.

This will represent a stark change from how we think about genetic testing currently, focusing on those whose pretest probability of an abnormal result is high due to family history or other risk factors. But for the 20-year-olds out there, I wouldn’t be surprised if your 30th birthday is a bit more significant than you have been anticipating.
 

Dr. Wilson is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator in New Haven, Conn. He disclosed no relevant conflicts of interest.

A version of this article first appeared on Medscape.com.

 


Surprising brain activity moments before death

Article Type
Changed
Fri, 05/05/2023 - 10:26

This transcript has been edited for clarity.

Welcome to Impact Factor, your weekly dose of commentary on a new medical study. I’m Dr F. Perry Wilson of the Yale School of Medicine.

All the participants in the study I am going to tell you about this week died. And three of them died twice. But their deaths provide us with a fascinating window into the complex electrochemistry of the dying brain. What we might be looking at, indeed, is the physiologic correlate of the near-death experience.

The concept of the near-death experience is culturally ubiquitous. And though the content seems to track along cultural lines – Western Christians are more likely to report seeing guardian angels, while Hindus are more likely to report seeing messengers of the god of death – certain factors seem to transcend culture: an out-of-body experience; a feeling of peace; and, of course, the light at the end of the tunnel.

As a materialist, I won’t discuss the possibility that these commonalities reflect some metaphysical structure to the afterlife. More likely, it seems to me, is that the commonalities result from the fact that the experience is mediated by our brains, and our brains, when dying, may be more alike than different.

We are talking about this study, appearing in the Proceedings of the National Academy of Sciences, by Jimo Borjigin and her team.

Dr. Borjigin studies the neural correlates of consciousness, perhaps one of the biggest questions in all of science today. To wit, if consciousness is derived from processes in the brain, what set of processes is minimally necessary for consciousness?

The study in question follows four unconscious patients – comatose patients, really – as life-sustaining support was withdrawn, up until the moment of death. Three had suffered severe anoxic brain injury in the setting of prolonged cardiac arrest. Though the heart was restarted, the brain damage was severe. The fourth had a large brain hemorrhage. All four patients were thus comatose and, though not brain-dead, unresponsive – with the lowest possible Glasgow Coma Scale score. No response to outside stimuli.

The families had made the decision to withdraw life support – to remove the breathing tube – but agreed to enroll their loved one in the study.

The team applied EEG leads to the head, EKG leads to the chest, and other monitoring equipment to observe the physiologic changes that occurred as the comatose and unresponsive patient died.

They watched as the heart rhythm deteriorated and eventually stopped. [EKG tracings: PNAS]

But this is a study about the brain, not the heart.

Prior to the withdrawal of life support, the brain electrical signals looked like this:

[EEG power spectrogram: PNAS/F. Perry Wilson, MD, MSCE]


What you see is the EEG power at various frequencies, with red being higher. All the red was down at the low frequencies. Consciousness, at least as we understand it, is a higher-frequency phenomenon.
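To make the idea of "power at various frequencies" concrete, here is a minimal sketch of how EEG band power is estimated with an FFT. This is not the study's analysis pipeline; the sampling rate, band edges, and synthetic signal (a strong 4 Hz delta wave plus a weak 80 Hz gamma wave, mimicking a comatose EEG) are all illustrative assumptions.

```python
import numpy as np

# Illustrative sketch (not the study's pipeline): estimate EEG band power
# with a plain FFT. The synthetic signal mixes a strong 4 Hz delta wave
# with a weak 80 Hz gamma wave; sampling rate and band edges are assumed.
fs = 500                      # sampling rate in Hz (assumption)
t = np.arange(0, 10, 1 / fs)  # 10 seconds of signal
signal = 50 * np.sin(2 * np.pi * 4 * t) + 2 * np.sin(2 * np.pi * 80 * t)

freqs = np.fft.rfftfreq(signal.size, d=1 / fs)      # one-sided frequency axis
power = np.abs(np.fft.rfft(signal)) ** 2 / signal.size

def band_power(lo, hi):
    """Sum spectral power over the [lo, hi) Hz band."""
    mask = (freqs >= lo) & (freqs < hi)
    return power[mask].sum()

delta = band_power(1, 4.5)    # low frequencies: dominate a comatose EEG
gamma = band_power(30, 100)   # high frequencies: the band that surged near death

print(f"delta power exceeds gamma power: {delta > gamma}")
```

In a spectrogram like the one in the paper, this same calculation is simply repeated over short sliding time windows, so you can watch the balance between low- and high-frequency power change as death approaches.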

Right after the breathing tube was removed, the power didn’t change too much, but you can see some increased activity at the higher frequencies.

[EEG power spectrogram: PNAS/F. Perry Wilson, MD, MSCE]


But in two of the four patients, something really surprising happened. Watch what happens as the brain gets closer and closer to death.

[EEG power spectrogram: PNAS/F. Perry Wilson, MD, MSCE]


Here, about 300 seconds before death, there was a power surge at the high gamma frequencies.

[EEG power spectrogram: PNAS/F. Perry Wilson, MD, MSCE]


This spike in power occurred in the somatosensory cortex and the dorsolateral prefrontal cortex, areas that are associated with conscious experience. It seems that this patient, 5 minutes before death, was experiencing something.

But I know what you’re thinking. This is a brain that is not receiving oxygen. Cells are going to become disordered quickly and start firing randomly – a last gasp, so to speak, before the end. Meaningless noise.

But connectivity mapping tells a different story. The signals seem to have structure.

Those high-frequency power surges increased connectivity in the posterior cortical “hot zone,” an area of the brain many researchers feel is necessary for conscious perception. This figure is not a map of raw brain electrical output like the one I showed before, but of coherence between brain regions in the consciousness hot zone. Those red areas indicate cross-talk – not the disordered scream of dying neurons, but a last set of messages passing back and forth from the parietal and posterior temporal lobes.

[Coherence map of the posterior cortical hot zone: PNAS]


In fact, the electrical patterns of the brains in these patients looked very similar to the patterns seen in dreaming humans, as well as in patients with epilepsy who report sensations of out-of-body experiences.
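The connectivity measure behind those maps can be sketched in a few lines. Coherence asks, frequency by frequency, how consistently two channels move together. In this toy example (not the study's data or pipeline; every parameter is an assumption), two channels share a common 40 Hz drive plus independent noise, and two other channels are pure noise.

```python
import numpy as np
from scipy.signal import coherence

# Hedged sketch of "connectivity": coherence between two channels at each
# frequency. Channels driven by a shared 40 Hz source show high coherence
# at 40 Hz; independent noise channels do not. All parameters are illustrative.
rng = np.random.default_rng(0)
fs = 500
t = np.arange(0, 20, 1 / fs)
common = np.sin(2 * np.pi * 40 * t)          # shared gamma-band drive

ch_a = common + 0.5 * rng.standard_normal(t.size)
ch_b = common + 0.5 * rng.standard_normal(t.size)
noise_a = rng.standard_normal(t.size)        # independent channels, no shared drive
noise_b = rng.standard_normal(t.size)

f, coh_linked = coherence(ch_a, ch_b, fs=fs, nperseg=1024)
_, coh_noise = coherence(noise_a, noise_b, fs=fs, nperseg=1024)

idx = np.argmin(np.abs(f - 40))              # frequency bin nearest 40 Hz
print(f"coherence at 40 Hz, linked channels: {coh_linked[idx]:.2f}")
print(f"coherence at 40 Hz, noise channels:  {coh_noise[idx]:.2f}")
```

The point of the figure in the paper is exactly this distinction: dying neurons firing randomly would look like the noise pair (low coherence everywhere), whereas the recorded signals looked like the linked pair, with structured cross-talk between regions.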

It’s critical to realize two things here. First, these signals of consciousness were not present before life support was withdrawn. These comatose patients had minimal brain activity; there was no evidence that they were experiencing anything before the process of dying began. Near death, these brains behaved fundamentally differently.

But second, we must realize that, although the brains of these individuals, in their last moments, appeared to be acting in a way that conscious brains act, we have no way of knowing if the patients were truly having a conscious experience. As I said, all the patients in the study died. Short of those metaphysics I alluded to earlier, we will have no way to ask them how they experienced their final moments.

Let’s be clear: This study doesn’t answer the question of what happens when we die. It says nothing about life after death or the existence or persistence of the soul. But what it does do is shed light on an incredibly difficult problem in neuroscience: the problem of consciousness. And as studies like this move forward, we may discover that the root of consciousness comes not from the breath of God or the energy of a living universe, but from very specific parts of the very complicated machine that is the brain, acting together to produce something transcendent. And to me, that is no less sublime.
 

Dr. Wilson is an associate professor of medicine and director of Yale’s Clinical and Translational Research Accelerator, Yale University, New Haven, Conn. His science communication work can be found in the Huffington Post, on NPR, and on Medscape. He tweets @fperrywilson and his new book, How Medicine Works and When It Doesn’t, is available now. Dr. Wilson has disclosed no relevant financial relationships.
 
