Even one night in the ED raises risk for death
This transcript has been edited for clarity.
As a consulting nephrologist, I go all over the hospital. Medicine floors, surgical floors, the ICU – I’ve even done consults in the operating room. And more and more, I do consults in the emergency department.
The reason I am doing more consults in the ED is not because the ED docs are getting gun shy with creatinine increases; it’s because patients are staying for extended periods in the ED despite being formally admitted to the hospital. It’s a phenomenon known as boarding, and it happens because there are simply not enough beds. You know the scene if you have ever been to a busy hospital: The ED is full to breaking, with patients on stretchers in hallways. It can often feel more like a war zone than a place for healing.
This is a huge problem.
The Joint Commission specifies that admitted patients should spend no more than 4 hours in the ED waiting for a bed in the hospital.
That is, based on what I’ve seen, hugely ambitious. But I should point out that I work in a hospital that runs near capacity all the time, and studies – from some of my Yale colleagues, actually – have shown that once hospital capacity exceeds 85%, boarding rates skyrocket.
I want to discuss some of the causes of extended boarding and some solutions. But before that, I should prove to you that this really matters, and for that we are going to dig into a new study that suggests ED boarding kills.
To put some hard numbers to the boarding problem, we turn to this paper out of France, appearing in JAMA Internal Medicine.
This is a unique study design. Basically, on a single day – Dec. 12, 2022 – researchers fanned out across France to 97 EDs and started counting patients. The study focused on those older than age 75 who were admitted to a hospital ward from the ED. The researchers then defined two groups: those who were sent up to the hospital floor before midnight, and those who spent at least from midnight until 8 AM in the ED (basically, people forced to sleep in the ED for a night). The middle-ground people who were sent up between midnight and 8 AM were excluded.
The baseline characteristics between the two groups of patients were pretty similar: median age around 86, 55% female. There were no significant differences in comorbidities. That said, comporting with previous studies, people in an urban ED, an academic ED, or a busy ED were much more likely to board overnight.
So, what we have are two similar groups of patients treated quite differently. Not quite a randomized trial, given the hospital differences, but not bad for purposes of analysis.
Here are the most important numbers from the trial:
This difference held up even after adjustment for patient and hospital characteristics. Put another way, you’d need to send 22 patients to the floor instead of boarding in the ED to save one life. Not a bad return on investment.
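The number needed to treat is just the reciprocal of the absolute risk difference, so a quick sketch makes the arithmetic behind that "22 patients" figure concrete. The mortality rates below are illustrative placeholders chosen to yield an NNT near 22, not the study's exact figures:

```python
# Number needed to treat (NNT) = 1 / absolute risk reduction (ARR).
def nnt(risk_control: float, risk_treated: float) -> float:
    """Return the number needed to treat given two event rates."""
    arr = risk_control - risk_treated
    if arr <= 0:
        raise ValueError("No risk reduction; NNT is undefined")
    return 1 / arr

# An NNT of 22 implies an absolute mortality difference of roughly
# 4.5 percentage points between boarders and non-boarders:
print(round(1 / 22, 3))          # prints 0.045

# Illustrative (hypothetical) rates that would give an NNT near 22:
print(round(nnt(0.155, 0.110)))  # prints 22
```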
It’s not entirely clear what the mechanism for the excess mortality might be, but the researchers note that patients kept in the ED overnight were about twice as likely to have a fall during their hospital stay – not surprising, given the dangers of gurneys in hallways and the sleep deprivation that trying to rest in a busy ED engenders.
I should point out that this could be worse in the United States. French ED doctors continue to care for admitted patients boarding in the ED, whereas in many hospitals in the United States, admitted patients are the responsibility of the floor team, regardless of where they are, making it more likely that these individuals may be neglected.
So, if boarding in the ED is a life-threatening situation, why do we do it? What conditions predispose to this?
You’ll hear a lot of talk, mostly from hospital administrators, saying that this is simply a problem of supply and demand. There are not enough beds for the number of patients who need beds. And staffing shortages don’t help either.
However, they never want to talk about the reasons for the staffing shortages, like poor pay, poor support, and, of course, the moral injury of treating patients in hallways.
The issue of volume is real. We could do a lot to prevent ED visits and hospital admissions by providing better access to preventive and primary care and improving our outpatient mental health infrastructure. But I think this framing passes the buck a little.
Another reason ED boarding occurs is the way our health care system is paid for. If you are building a hospital, you have little incentive to build in excess capacity. The most efficient hospital, from a profit-and-loss standpoint, is one that is 100% full as often as possible. That may be fine at times, but throw in a respiratory virus or even a pandemic, and those systems fracture under the pressure.
Let us also remember that not all hospital beds are given to patients who acutely need hospital beds. Many beds, in many hospitals, are necessary to handle postoperative patients undergoing elective procedures. Those patients having a knee replacement or abdominoplasty don’t spend the night in the ED when they leave the OR; they go to a hospital bed. And those procedures are – let’s face it – more profitable than an ED admission for a medical issue. That’s why, even when hospitals expand the number of beds they have, they do it with an eye toward increasing the rate of those profitable procedures, not decreasing the burden faced by their ED.
For now, the band-aid solution might be to better triage individuals boarding in the ED for floor access, prioritizing those of older age, greater frailty, or more medical complexity. But it feels like a stop-gap measure as long as the incentives are aligned to view an empty hospital bed as a sign of failure in the health system instead of success.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. He reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
This drug works, but wait till you hear what’s in it
This transcript has been edited for clarity.
As some of you may know, I do a fair amount of clinical research developing and evaluating artificial intelligence (AI) models, particularly machine learning algorithms that predict certain outcomes.
A thorny issue that comes up as algorithms have gotten more complicated is “explainability.” The problem is that AI can be a black box. Even if you have a model that is very accurate at predicting death, clinicians don’t trust it unless you can explain how it makes its predictions – how it works. “It just works” is not good enough to build trust.
It’s easier to build trust when you’re talking about a medication rather than a computer program. When a new blood pressure drug comes out that lowers blood pressure, importantly, we know why it lowers blood pressure. Every drug has a mechanism of action and, for most of the drugs in our arsenal, we know what that mechanism is.
But what if there were a drug – or better yet, a treatment – that worked, and I could honestly say we have no idea how it works? That’s what came across my desk today in what I believe is the largest, most rigorous trial of a traditional Chinese medication in history.
“Traditional Chinese medicine” is an omnibus term that refers to a class of therapies and health practices that are fundamentally different from how we practice medicine in the West.
It’s a highly personalized practice, with practitioners using often esoteric means to choose what substance to give what patient. That personalization makes traditional Chinese medicine nearly impossible to study in the typical randomized trial framework because treatments are not chosen solely on the basis of disease states.
The lack of scientific rigor in traditional Chinese medicine means that it is rife with practices and beliefs that can legitimately be called pseudoscience. As a nephrologist who has treated someone for “Chinese herb nephropathy,” I can tell you that some of the practices may be actively harmful.
But that doesn’t mean there is nothing there. I do not subscribe to the “argument from antiquity” – the idea that because something has been done for a long time it must be correct. But at the same time, traditional and non–science-based medicine practices could still identify therapies that work.
And with that, let me introduce you to Tongxinluo. Tongxinluo literally means “to open the network of the heart.” It is a substance that has been used for centuries by traditional Chinese medicine practitioners to treat angina; it was approved by the Chinese state medicine agency in 1996.
Like many traditional Chinese medicine preparations, Tongxinluo is not a single chemical – far from it. It is a powder made from a variety of plant and insect parts, as you can see here.
I can’t imagine running a trial of this concoction in the United States; I just don’t see an institutional review board signing off, given the ingredient list.
But let’s set that aside and talk about the study itself.
While I don’t have access to any primary data, the write-up of the study suggests that it was highly rigorous. Chinese researchers randomized 3,797 patients with ST-elevation MI to take Tongxinluo – four capsules, three times a day for 12 months – or matching placebo. The placebo was designed to look just like the Tongxinluo capsules and, if the capsules were opened, to smell like them as well.
Researchers and participants were blinded, and the statistical analysis was done both by the primary team and an independent research agency, also in China.
And the results were pretty good. The rate of the primary outcome, 30-day major cardiovascular and cerebral events, was significantly lower in the intervention group than in the placebo group.
One-year outcomes were similarly good; 8.3% of the placebo group suffered a major cardiovascular or cerebral event in that time frame, compared with 5.3% of the Tongxinluo group. In short, if this were a pure chemical compound from a major pharmaceutical company, well, you might be seeing a new treatment for heart attack – and a boost in stock price.
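To put those 1-year rates in perspective, here is a quick sketch of the effect size they imply, using only the 8.3% and 5.3% figures reported above:

```python
# One-year major cardiovascular/cerebral event rates from the trial write-up.
placebo_rate = 0.083     # 8.3% of the placebo group
tongxinluo_rate = 0.053  # 5.3% of the Tongxinluo group

arr = placebo_rate - tongxinluo_rate  # absolute risk reduction
rrr = arr / placebo_rate              # relative risk reduction
nnt = 1 / arr                         # number needed to treat

print(f"ARR: {arr:.1%}")  # prints "ARR: 3.0%"
print(f"RRR: {rrr:.0%}")  # prints "RRR: 36%"
print(f"NNT: {nnt:.0f}")  # prints "NNT: 33"
```

That is, roughly 33 patients would need a year of treatment to prevent one event, which would be a respectable effect size for a post-MI therapy.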
But there are some issues here, generalizability being a big one. This study was done entirely in China, so its applicability to a more diverse population is unclear. Moreover, the quality of post-MI care in this study is quite a bit worse than what we’d see here in the United States, with just over 50% of patients being discharged on a beta-blocker, for example.
But issues of generalizability and potentially substandard supplementary treatments are the usual reasons we worry about new medication trials. And those concerns pale before the big one here, which is: we don’t know why this works.
Is it the extract of leech in the preparation perhaps thinning the blood a bit? Or is it the antioxidants in the ginseng, or something from the Pacific centipede or the sandalwood?
This trial doesn’t read to me as a vindication of traditional Chinese medicine but rather as an example of missed opportunity. More rigorous scientific study over the centuries that Tongxinluo has been used could have identified one, or perhaps more, compounds with strong therapeutic potential.
Purity of medical substances is incredibly important. Pure substances have predictable effects and side effects. Pure substances interact with other treatments we give patients in predictable ways. Pure substances can be quantified for purity by third parties, they can be manufactured according to accepted standards, and they can be assessed for adulteration. In short, pure substances pose less risk.
Now, I know that may come off as particularly sterile. Some people will feel that a “natural” substance has some inherent benefit over pure compounds. And, of course, there is something soothing about imagining a traditional preparation handed down over centuries, being prepared with care by a single practitioner, in contrast to the sterile industrial processes of a for-profit pharmaceutical company. I get it. But natural is not the same as safe. I am glad I have access to purified aspirin and don’t have to chew willow bark. I like my pure penicillin and am glad I don’t have to make a mold slurry to treat a bacterial infection.
I applaud the researchers for subjecting Tongxinluo to the rigor of a well-designed trial. They have generated data that are incredibly exciting, but not because we have a new treatment for ST-elevation MI on our hands; it’s because we have a map to a new treatment. The next big thing in heart attack care is not the mixture that is Tongxinluo, but it might be in the mixture.
A version of this article first appeared on Medscape.com.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and on Medscape. He tweets @fperrywilson and his new book, “How Medicine Works and When It Doesn’t,” is available now.
But that doesn’t mean there is nothing there. I do not subscribe to the “argument from antiquity” – the idea that because something has been done for a long time it must be correct. But at the same time, traditional and non–science-based medicine practices could still identify therapies that work.
And with that, let me introduce you to Tongxinluo. Tongxinluo literally means “to open the network of the heart,” and it is a substance that has been used for centuries by traditional Chinese medicine practitioners to treat angina; it was formally approved by the Chinese state medicine agency in 1996.
Like many traditional Chinese medicine preparations, Tongxinluo is not a single chemical – far from it. It is a powder made from a variety of plant and insect parts, as you can see here.
I can’t imagine running a trial of this concoction in the United States; I just don’t see an institutional review board signing off, given the ingredient list.
But let’s set that aside and talk about the study itself.
While I don’t have access to any primary data, the write-up of the study suggests that it was highly rigorous. Chinese researchers randomized 3,797 patients with ST-elevation MI to take Tongxinluo – four capsules, three times a day for 12 months – or matching placebo. The placebo was designed to look just like the Tongxinluo capsules and, if the capsules were opened, to smell like them as well.
Researchers and participants were blinded, and the statistical analysis was done both by the primary team and an independent research agency, also in China.
And the results were pretty good. The primary outcome – 30-day major cardiovascular and cerebral events – occurred significantly less often in the intervention group than in the placebo group.
One-year outcomes were similarly good; 8.3% of the placebo group suffered a major cardiovascular or cerebral event in that time frame, compared with 5.3% of the Tongxinluo group. In short, if this were a pure chemical compound from a major pharmaceutical company, well, you might be seeing a new treatment for heart attack – and a boost in stock price.
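To put those one-year rates in more familiar clinical terms, a quick back-of-the-envelope calculation (using only the event rates quoted above):

```python
# One-year major cardiovascular/cerebral event rates from the trial write-up.
placebo_rate = 0.083      # 8.3% in the placebo group
treatment_rate = 0.053    # 5.3% in the Tongxinluo group

# Absolute risk reduction and relative risk.
arr = placebo_rate - treatment_rate    # 3 percentage points
rr = treatment_rate / placebo_rate     # ~0.64, a ~36% relative reduction

# Number needed to treat: patients treated for a year to prevent one event.
nnt = 1 / arr                          # ~33

print(f"ARR = {arr:.3f}, RR = {rr:.2f}, NNT ≈ {nnt:.0f}")
```

An NNT around 33 for hard cardiovascular outcomes would indeed be the kind of number that moves a stock price.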
But there are some issues here, generalizability being a big one. This study was done entirely in China, so its applicability to a more diverse population is unclear. Moreover, the quality of post-MI care in this study is quite a bit worse than what we’d see here in the United States, with just over 50% of patients being discharged on a beta-blocker, for example.
But issues of generalizability and potentially substandard supplementary treatments are the usual reasons we worry about new medication trials. And those concerns seem to pale before the big one I have here which is, you know – we don’t know why this works.
Is it the extract of leech in the preparation perhaps thinning the blood a bit? Or is it the antioxidants in the ginseng, or something from the Pacific centipede or the sandalwood?
This trial doesn’t read to me as a vindication of traditional Chinese medicine but rather as an example of missed opportunity. More rigorous scientific study over the centuries that Tongxinluo has been used could have identified one, or perhaps more, compounds with strong therapeutic potential.
Purity of medical substances is incredibly important. Pure substances have predictable effects and side effects. Pure substances interact with other treatments we give patients in predictable ways. Pure substances can be quantified for purity by third parties, they can be manufactured according to accepted standards, and they can be assessed for adulteration. In short, pure substances pose less risk.
Now, I know that may come off as particularly sterile. Some people will feel that a “natural” substance has some inherent benefit over pure compounds. And, of course, there is something soothing about imagining a traditional preparation handed down over centuries, being prepared with care by a single practitioner, in contrast to the sterile industrial processes of a for-profit pharmaceutical company. I get it. But natural is not the same as safe. I am glad I have access to purified aspirin and don’t have to chew willow bark. I like my pure penicillin and am glad I don’t have to make a mold slurry to treat a bacterial infection.
I applaud the researchers for subjecting Tongxinluo to the rigor of a well-designed trial. They have generated data that are incredibly exciting, but not because we have a new treatment for ST-elevation MI on our hands; it’s because we have a map to a new treatment. The next big thing in heart attack care is not the mixture that is Tongxinluo, but it might be in the mixture.
A version of this article first appeared on Medscape.com.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator. His science communication work can be found in the Huffington Post, on NPR, and on Medscape. He tweets @fperrywilson and his new book, “How Medicine Works and When It Doesn’t,” is available now.
AI in medicine has a major Cassandra problem
This transcript has been edited for clarity.
Today I’m going to talk to you about a study at the cutting edge of modern medicine, one that uses an artificial intelligence (AI) model to guide care. But before I do, I need to take you back to the late Bronze Age, to a city located on the coast of what is now Turkey.
Troy’s towering walls made it seem unassailable, but that would not stop the Achaeans and their fleet of black ships from making landfall, and, after a siege, destroying the city. The destruction of Troy, as told in the Iliad and the Aeneid, was foretold by Cassandra, the daughter of King Priam and Priestess of Troy.
Cassandra had been given the gift of prophecy by the god Apollo in exchange for her favors. But after the gift was bestowed, she rejected the bright god and, in his rage, he added a curse to her blessing: that no one would ever believe her prophecies.
Thus it was that when her brother Paris set off to Sparta to abduct Helen, she warned him that his actions would lead to the downfall of their great city. He, of course, ignored her.
And you know the rest of the story.
Why am I telling you the story of Cassandra of Troy when we’re supposed to be talking about AI in medicine? Because AI has a major Cassandra problem.
The recent history of AI, and particularly the subset of AI known as machine learning in medicine, has been characterized by an accuracy arms race.
The electronic health record allows for the collection of volumes of data orders of magnitude greater than what we have ever been able to collect before. And all that data can be crunched by various algorithms to make predictions about, well, anything – whether a patient will be transferred to the intensive care unit, whether a GI bleed will need an intervention, whether someone will die in the next year.
Studies in this area tend to rely on retrospective datasets, and as time has gone on, better algorithms and more data have led to better and better predictions. In some simpler cases, machine-learning models have achieved near-perfect accuracy – Cassandra-level accuracy – as in the reading of chest x-rays for pneumonia, for example.
But as Cassandra teaches us, even perfect prediction is useless if no one believes you, if they don’t change their behavior. And this is the central problem of AI in medicine today. Many people are focusing on accuracy of the prediction but have forgotten that high accuracy is just table stakes for an AI model to be useful. It has to not only be accurate, but its use also has to change outcomes for patients. We need to be able to save Troy.
The best way to determine whether an AI model will help patients is to treat a model like we treat a new medication and evaluate it through a randomized trial. That’s what researchers, led by Shannon Walker of Vanderbilt University, Nashville, Tenn., did in a paper appearing in JAMA Network Open.
The model in question was one that predicted venous thromboembolism – blood clots – in hospitalized children. The model took in a variety of data points from the health record: a history of blood clot, history of cancer, presence of a central line, a variety of lab values. And the predictive model was very good – maybe not Cassandra good, but it achieved an AUC of 0.90, which means it had very high accuracy.
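As a reminder of what that number means: an AUC of 0.90 says that if you pick a random child who developed a clot and a random child who didn't, the model scores the clot case higher about 90% of the time. A tiny rank-based illustration with made-up risk scores (the data here are purely hypothetical, not from the study):

```python
from itertools import product

def auc(pos_scores, neg_scores):
    """Probability a random positive outranks a random negative (ties count half)."""
    pairs = list(product(pos_scores, neg_scores))
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0 for p, n in pairs)
    return wins / len(pairs)

# Toy risk scores: kids who clotted vs. kids who didn't.
pos = [0.31, 0.12, 0.45, 0.08]
neg = [0.02, 0.05, 0.11, 0.01, 0.03]

print(f"AUC = {auc(pos, neg):.2f}")  # 19 of 20 pairs ranked correctly -> 0.95
```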
But again, accuracy is just table stakes.
The authors deployed the model in the live health record and recorded the results. For half of the kids, that was all that happened; no one actually saw the predictions. For those randomized to the intervention, the hematology team would be notified when the risk for clot was calculated to be greater than 2.5%. The hematology team would then contact the primary team to discuss prophylactic anticoagulation.
This is an elegant approach.
Let’s start with those table stakes – accuracy. The predictions were, by and large, pretty accurate in this trial. Of the 135 kids who developed blood clots, 121 had been flagged by the model in advance. That’s about 90%. The model flagged about 10% of kids who didn’t get a blood clot as well, but that’s not entirely surprising since the threshold for flagging was a 2.5% risk.
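That “about 90%” is just the model's sensitivity among eventual clot cases, computed directly from the counts above:

```python
# Flagging performance reported in the trial.
clots_flagged = 121   # kids who developed clots and had been flagged in advance
clots_total = 135     # all kids who developed clots

sensitivity = clots_flagged / clots_total
print(f"Sensitivity ≈ {sensitivity:.1%}")  # ≈ 89.6%, i.e. "about 90%"
```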
Given that the model preidentified almost every kid who would go on to develop a blood clot, it would make sense that kids randomized to the intervention would do better; after all, Cassandra was calling out her warnings.
But those kids didn’t do better. The rate of blood clot was no different between the group that used the accurate prediction model and the group that did not.
Why? Why does the use of an accurate model not necessarily improve outcomes?
First of all, a warning must lead to some change in management. Indeed, the kids in the intervention group were more likely to receive anticoagulation, but barely so. There were lots of reasons for this: physician preference, imminent discharge, active bleeding, and so on.
But let’s take a look at the 77 kids in the intervention arm who developed blood clots, because I think this is an instructive analysis.
Six of them did not meet the 2.5% threshold criteria – cases where the model missed its mark. Again, accuracy is table stakes.
Of the remaining 71, only 16 got a recommendation from the hematologist to start anticoagulation. Why not more? Well, the model identified some of the high-risk kids on the weekend, and it seems that the study team did not contact treatment teams during that time. That may account for about 40% of these cases. The remainder had some contraindication to anticoagulation.
Most tellingly, of the 16 who did get a recommendation to start anticoagulation, the recommendation was followed in only seven patients.
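The attrition from flagged patients to acted-upon recommendations can be tallied directly from the numbers above:

```python
# Attrition funnel for the 77 intervention-arm kids who developed clots.
clots_in_intervention = 77
below_threshold = 6                                   # never met the 2.5% criteria
eligible = clots_in_intervention - below_threshold    # 71 correctly flagged
recommended = 16                                      # hematologist advised anticoagulation
followed = 7                                          # recommendation actually followed

# Fraction of clot cases where the full pipeline "worked" end to end.
end_to_end = followed / clots_in_intervention
print(f"Flagged: {eligible}, recommended: {recommended}, followed: {followed}")
print(f"End-to-end action rate: {end_to_end:.0%}")    # ~9% of clot cases
```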
This is the gap between accurate prediction and the ability to change outcomes for patients. A prediction is useless if it is wrong, for sure. But it’s also useless if you don’t tell anyone about it. It’s useless if you tell someone but they can’t do anything about it. And it’s useless if they could do something about it but choose not to.
That’s the gulf that these models need to cross at this point. So, the next time some slick company tells you how accurate their AI model is, ask them if accuracy is really the most important thing. If they say, “Well, yes, of course,” then tell them about Cassandra.
Dr. F. Perry Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Every click you make, the EHR is watching you
This transcript has been edited for clarity.
When I close my eyes and imagine what it is I do for a living, I see a computer screen.
I’m primarily a clinical researcher, so much of what I do is looking at statistical software, or, more recently, writing grant applications. But even when I think of my clinical duties, I see that computer screen.
The reason? The electronic health record (EHR) – the hot, beating heart of medical care in the modern era. Our most powerful tool and our greatest enemy.
The EHR records everything – not just the vital signs and lab values of our patients, not just our notes and billing codes. Everything. Every interaction we have is tracked and can be analyzed. The EHR is basically Sting in the song “Every Breath You Take.” Every click you make, it is watching you.
Researchers are leveraging that panopticon to give insight into something we don’t talk about frequently: the issue of racial bias in medicine. Is our true nature revealed by our interactions with the EHR?
We’re talking about this study in JAMA Network Open.
Researchers leveraged huge amounts of EHR data from two big academic medical centers, Vanderbilt University Medical Center and Northwestern University Medical Center. All told, there are data from nearly 250,000 hospitalizations here.
The researchers created a metric for EHR engagement. Basically, they summed the number of clicks and other EHR interactions that occurred during the hospitalization, divided by the length of stay in days, to create a sort of average “engagement per day” metric. This number was categorized into four groups: low engagement, medium engagement, high engagement, and very high engagement.
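A minimal sketch of that metric, under my reading of the methods (the function names and quartile cutpoints here are hypothetical, chosen just for illustration):

```python
def engagement_per_day(n_interactions: int, length_of_stay_days: float) -> float:
    """Average EHR interactions (clicks, etc.) per hospital day."""
    return n_interactions / max(length_of_stay_days, 1e-9)

def engagement_category(value: float, cutpoints: tuple) -> str:
    """Bin a per-day engagement value into the study's four categories.
    Cutpoints would be the quartile boundaries of the cohort (made up here)."""
    low, med, high = cutpoints
    if value < low:
        return "low"
    if value < med:
        return "medium"
    if value < high:
        return "high"
    return "very high"

# Example: 600 interactions over a 4-day stay, with illustrative cutpoints.
score = engagement_per_day(600, 4)                  # 150 interactions per day
print(engagement_category(score, (100, 160, 220)))  # -> medium
```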
What factors would predict higher engagement? Well, patients identifying as a racial or ethnic minority generally received less engagement – except Black patients, who actually got a bit more engagement.
So, right away we need to be concerned about the obvious implications. Less engagement with the EHR may mean lower-quality care, right? Less attention to medical issues. And if that differs systematically by race, that’s a problem.
But we need to be careful here, because engagement in the health record is not random. Many factors would lead you to spend more time in one patient’s chart vs. another. Medical complexity is the most obvious one. The authors did their best to account for this, adjusting for patients’ age, sex, insurance status, comorbidity score, and social deprivation index based on their ZIP code. But notably, they did not account for the acuity of illness during the hospitalization. If individuals identifying as a minority were, all else being equal, less likely to be severely ill by the time they were hospitalized, you might see results like this.
The authors also restrict their analysis to individuals who were discharged alive. I’m not entirely clear why they made this choice. Most people don’t die in the hospital; the inpatient mortality rate at most centers is 1%-1.5%. But excluding those patients could potentially bias these results, especially if race is, all else being equal, a predictor of inpatient mortality, as some studies have shown.
But the truth is, these data aren’t coming out of nowhere; they don’t exist in a vacuum. Numerous studies demonstrate different intensity of care among minority vs. nonminority individuals. There is this study, which shows that minority populations are less likely to be placed on the liver transplant waitlist.
There is this study, which found that minority kids with type 1 diabetes were less likely to get insulin pumps than were their White counterparts. And this one, which showed that kids with acute appendicitis were less likely to get pain-control medications if they were Black.
This study shows that although life expectancy decreased across all races during the pandemic, it decreased the most among minority populations.
This list goes on. It’s why the CDC has called racism a “fundamental cause of ... disease.”
So, yes, it is clear that there are racial disparities in health care outcomes. It is clear that there are racial disparities in treatments. It is also clear that virtually every physician believes they deliver equitable care. Somewhere, this disconnect arises. Could the actions we take in the EHR reveal the unconscious biases we have? Does the all-seeing eye of the EHR see not only into our brains but into our hearts? And if it can, are we ready to confront what it sees?
F. Perry Wilson, MD, MSCE, is associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
The surprising link between loneliness and Parkinson’s disease
This transcript has been edited for clarity.
On May 3, 2023, Surgeon General Vivek Murthy issued an advisory raising an alarm about what he called an “epidemic of loneliness” in the United States.
Now, I am not saying that Vivek Murthy read my book, “How Medicine Works and When It Doesn’t” – released in January and available in bookstores now – where, in chapter 11, I call attention to the problem of loneliness and its relationship to the exponential rise in deaths of despair. But Vivek, if you did, let me know. I could use the publicity.
No, of course the idea that loneliness is a public health issue is not new, but I’m glad to see it finally getting attention. At this point, studies have linked loneliness to heart disease, stroke, dementia, and premature death.
The UK Biobank is really a treasure trove of data for epidemiologists. I must see three to four studies a week coming out of this mega-dataset. This one, appearing in JAMA Neurology, caught my eye for its focus specifically on loneliness as a risk factor – something I’m hoping to see more of in the future.
The study examines data from just under 500,000 individuals in the United Kingdom who answered a survey including the question “Do you often feel lonely?” between 2006 and 2010; 18.4% of people answered yes. Individuals’ electronic health record data were then monitored over time to see who would get a new diagnosis code consistent with Parkinson’s disease. Through 2021, 2,822 people did – that’s just over half a percent.
So, now we do the statistics thing. Of the nonlonely folks, 2,273 went on to develop Parkinson’s disease. Of those who said they often feel lonely, 549 people did. The raw numbers here, to be honest, aren’t that compelling. Lonely people had an absolute risk for Parkinson’s disease about 0.03% higher than that of nonlonely people. Put another way, you’d need to take over 3,000 lonely souls and make them not lonely to prevent 1 case of Parkinson’s disease.
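That “over 3,000” figure follows from the standard number-needed-to-treat arithmetic: NNT is the reciprocal of the absolute risk difference. A minimal sketch using the roughly 0.03% difference quoted above (the raw counts give a figure in the same ballpark):

```python
# Number needed to treat (NNT) is the reciprocal of the absolute risk
# difference (ARD). Here we use the ~0.03% difference quoted in the text.

ard = 0.0003        # 0.03% absolute risk difference, expressed as a proportion
nnt = 1 / ard       # lonely people you'd need to "de-lonely" per case averted
print(round(nnt))   # over 3,000
```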
Still, the costs of loneliness are not measured exclusively in Parkinson’s disease, and I would argue that the real risks here come from other sources: alcohol abuse, drug abuse, and suicide. Nevertheless, the weak but significant association with Parkinson’s disease reminds us that loneliness is a neurologic phenomenon. There is something about social connection that affects our brain in a way that is not just spiritual; it is actually biological.
Of course, people who say they are often lonely are different in other ways from people who report not being lonely. Lonely people, in this dataset, were younger, more likely to be female, less likely to have a college degree, in worse physical health, and engaged in more high-risk health behaviors like smoking.
The authors adjusted for all of these factors and found that, on the relative scale, lonely people were still about 20%-30% more likely to develop Parkinson’s disease.
So, what do we do about this? There is no pill for loneliness, and God help us if there ever is. Recognizing the problem is a good start. But there are some policy things we can do to reduce loneliness. We can invest in public spaces that bring people together – parks, museums, libraries – and public transportation. We can deal with tech companies that are so optimized at capturing our attention that we cease to engage with other humans. And, individually, we can just reach out a bit more. We’ve spent the past few pandemic years with our attention focused sharply inward. It’s time to look out again.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Overburdened: Health care workers more likely to die by suicide
This transcript has been edited for clarity.
Welcome to Impact Factor, your weekly dose of commentary on a new medical study.
If you run into a health care provider these days and ask, “How are you doing?” you’re likely to get a response like this one: “You know, hanging in there.” You smile and move on. But it may be time to go a step further. If you ask that next question – “No, really, how are you doing?” – well, you might need to carve out some time.
It’s been a rough few years for those of us in the health care professions. Our lives, dominated by COVID-related concerns at home, were equally dominated by COVID concerns at work. On the job, there were fewer and fewer of us around as exploitation and COVID-related stressors led doctors, nurses, and others to leave the profession entirely or take early retirement. Even now, I’m not sure we’ve recovered. Staffing in the hospitals is still a huge problem, and the persistence of impersonal meetings via teleconference – which not only prevent any sort of human connection but, audaciously, run from one into another without a break – robs us of even the subtle joy of walking from one hallway to another for 5 minutes of reflection before sitting down to view the next hastily cobbled together PowerPoint.
I’m speaking in generalities, of course.
I’m talking about how bad things are now because, in truth, they’ve never been great. And that may be why health care workers – people with jobs focused on serving others – are nevertheless at substantially increased risk for suicide.
Analyses through the years have shown that physicians tend to have higher rates of death from suicide than the general population. There are reasons for this that may not entirely be because of work-related stress. Doctors’ suicide attempts are more often lethal – we know what is likely to work, after all.
And, according to this paper in JAMA, it is those people who may be suffering most of all.
The study is a nationally representative sample based on the 2008 American Community Survey. Records were linked to the National Death Index through 2019.
Survey respondents were classified into five categories of health care worker, as you can see here. And 1,666,000 non–health care workers served as the control group.
Let’s take a look at the numbers.
I’m showing you age- and sex-standardized rates of death from suicide, starting with non–health care workers. In this study, physicians have rates of death from suicide similar to those of the general population. Nurses have higher rates, but health care support workers – nurses’ aides, home health aides – have rates nearly twice those of the general population.
Only social and behavioral health workers had rates lower than those in the general population, perhaps because they know how to access life-saving resources.
Of course, these groups differ in a lot of ways – education and income, for example. But even after adjustment for these factors as well as for sex, race, and marital status, the results persist. The only group with even a trend toward lower suicide rates is the social and behavioral health workers.
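The “age- and sex-standardized” rates above come from direct standardization: each occupation’s stratum-specific rates are weighted by a common standard population, so groups with different age and sex mixes can be compared fairly. A minimal sketch – every weight and rate below is hypothetical, for illustration only:

```python
# Direct standardization: weight each stratum's crude rate by that stratum's
# share of a common standard population. All figures here are hypothetical.

# standard population weights by (age band, sex); they sum to 1
std_weights = {("<45", "F"): 0.30, ("<45", "M"): 0.28,
               ("45+", "F"): 0.22, ("45+", "M"): 0.20}

# crude suicide rates per 100,000 person-years in one occupational group
stratum_rates = {("<45", "F"): 8.0, ("<45", "M"): 20.0,
                 ("45+", "F"): 10.0, ("45+", "M"): 30.0}

# standardized rate = weighted average of the stratum rates
standardized = sum(w * stratum_rates[s] for s, w in std_weights.items())
print(f"{standardized:.1f} per 100,000")
```

Because every group is weighted to the same standard population, a difference in standardized rates reflects the rates themselves, not the groups’ demographic composition.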
There has been much hand-wringing about rates of physician suicide in the past. It is still a very real problem. But this paper finally highlights that there is a lot more to the health care profession than physicians. It’s time we acknowledge and support the people in our profession who seem to be suffering more than any of us: the aides, the techs, the support staff – the overworked and underpaid who have to deal with all the stresses that physicians like me face and then some.
There’s more to suicide risk than just your job; I know that. Family matters. Relationships matter. Medical and psychiatric illnesses matter. But ignoring this problem when it is right here – in our own house, so to speak – can’t continue.
Might I suggest we start by asking someone in our profession – whether doctor, nurse, aide, or tech – how they are doing. How they are really doing. And when we are done listening, we use what we hear to advocate for real change.
Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
This transcript has been edited for clarity.
Welcome to Impact Factor, your weekly dose of commentary on a new medical study.
If you run into a health care provider these days and ask, “How are you doing?” you’re likely to get a response like this one: “You know, hanging in there.” You smile and move on. But it may be time to go a step further. If you ask that next question – “No, really, how are you doing?” Well, you might need to carve out some time.
It’s been a rough few years for those of us in the health care professions. Our lives, dominated by COVID-related concerns at home, were equally dominated by COVID concerns at work. On the job, there were fewer and fewer of us around as exploitation and COVID-related stressors led doctors, nurses, and others to leave the profession entirely or take early retirement. Even now, I’m not sure we’ve recovered. Staffing in the hospitals is still a huge problem, and the persistence of impersonal meetings via teleconference – which not only prevent any sort of human connection but, audaciously, run from one into another without a break – robs us of even the subtle joy of walking from one hallway to another for 5 minutes of reflection before sitting down to view the next hastily cobbled together PowerPoint.
I’m speaking in generalities, of course.
I’m talking about how bad things are now because, in truth, they’ve never been great. And that may be why health care workers – people with jobs focused on serving others – are nevertheless at substantially increased risk for suicide.
Analyses through the years have shown that physicians tend to have higher rates of death from suicide than the general population. There are reasons for this that may not entirely be because of work-related stress. Doctors’ suicide attempts are more often lethal – we know what is likely to work, after all.
And, according to this paper in JAMA, it is those people who may be suffering most of all.
The study is a nationally representative sample based on the 2008 American Community Survey. Records were linked to the National Death Index through 2019.
Survey respondents were classified into five categories of health care worker, as you can see here. And 1,666,000 non–health care workers served as the control group.
Let’s take a look at the numbers.
I’m showing you age- and sex-standardized rates of death from suicide, starting with non–health care workers. In this study, physicians have similar rates of death from suicide to the general population. Nurses have higher rates, but health care support workers – nurses’ aides, home health aides – have rates nearly twice that of the general population.
Only social and behavioral health workers had rates lower than those in the general population, perhaps because they know how to access life-saving resources.
Of course, these groups differ in a lot of ways – education and income, for example. But even after adjustment for these factors as well as for sex, race, and marital status, the results persist. The only group with even a trend toward lower suicide rates are social and behavioral health workers.
There has been much hand-wringing about rates of physician suicide in the past. It is still a very real problem. But this paper finally highlights that there is a lot more to the health care profession than physicians. It’s time we acknowledge and support the people in our profession who seem to be suffering more than any of us: the aides, the techs, the support staff – the overworked and underpaid who have to deal with all the stresses that physicians like me face and then some.
There’s more to suicide risk than just your job; I know that. Family matters. Relationships matter. Medical and psychiatric illnesses matter. But ignoring this problem when it is right here – in our own house, so to speak – can’t continue.
Might I suggest we start by asking someone in our profession – whether doctor, nurse, aide, or tech – how they are doing. How they are really doing. And when we are done listening, we use what we hear to advocate for real change.
Dr. Wilson is associate professor of medicine and public health and director of the Clinical and Translational Research Accelerator at Yale University, New Haven, Conn. He has disclosed no relevant financial relationships.
A version of this article appeared on Medscape.com.
Laboratory testing: No doctor required?
This transcript has been edited for clarity.
Let’s assume, for the sake of argument, that I am a healthy 43-year-old man. Nevertheless, I am interested in getting my vitamin D level checked. My primary care doc says it’s unnecessary, but that doesn’t matter, because a variety of direct-to-consumer testing companies will do it without a doctor’s prescription – for a fee, of course.
Is that okay? Should I be able to get the test?
What if instead of my vitamin D level, I want to test my testosterone level, or my PSA, or my cadmium level, or my Lyme disease antibodies, or even have a full-body MRI scan?
These questions are becoming more and more common, because the direct-to-consumer testing market is exploding.
We’re talking about direct-to-consumer testing, thanks to this paper: Policies of US Companies Offering Direct-to-Consumer Laboratory Tests, appearing in JAMA Internal Medicine, which characterizes the practices of these companies.
But before we get to the study, a word on this market. Direct-to-consumer lab testing is projected to be a $2 billion industry by 2025, and lab testing megacorporations Quest Diagnostics and Labcorp are both jumping headlong into this space.
Why is this happening? A couple of reasons, I think. First, the increasing cost of health care has led payers to place significant restrictions on what tests can be ordered and under what circumstances. Physicians are all too familiar with the “prior authorization” system that seeks to limit even the tests we think would benefit our patients.
Frustrated with such a system, it’s no wonder that patients are increasingly deciding to go it on their own. Sure, insurance won’t cover these tests, but the prices are transparent, and competition actually keeps them somewhat reasonable. So, is this a win-win? Shouldn’t we allow people to get the tests they want, at least if they are willing to pay for them?
Of course, it’s not quite that simple. If the tests are normal, or negative, then sure – no harm, no foul. But when they are positive, everything changes. What happens when the PSA test I got myself via a direct-to-consumer testing company comes back elevated? Well, at that point, I am right back in the traditional mode of medicine – seeing my doctor, probably getting repeat testing, biopsies, and so on – and some payer will be on the hook for that, which is to say that all of us will be on the hook for that.
One other reason direct-to-consumer testing is getting more popular is a more difficult-to-characterize phenomenon which I might call postpandemic individualism. I’ve seen this across several domains, but I think in some ways the pandemic led people to focus more attention on themselves, perhaps because we were so isolated from each other. Optimizing health through data – whether using a fitness tracking watch, meticulously counting macronutrient intake, or ordering your own lab tests – may be a form of exerting control over a universe that feels increasingly chaotic. But what do I know? I’m not a psychologist.
The study characterizes a total of 21 direct-to-consumer testing companies. They offer a variety of services, as you can see here, with the majority in the endocrine space: thyroid, diabetes, men’s and women’s health. A smattering of companies offer more esoteric testing, such as heavy metals and Lyme disease.
Who’s in charge of all this? It’s fairly regulated, actually, but perhaps not in the way you think. The FDA uses its CLIA authority to ensure that these tests are accurate. The FTC ensures that the companies do not engage in false advertising. But no one is minding the store as to whether the tests are actually beneficial either to an individual or to society.
The 21 companies varied dramatically in how they communicate the risks and results of these tests. All of them had a disclaimer that the information does not represent comprehensive medical advice. Fine. But only a minority acknowledged any risks or limitations of the tests. Fewer than half had a statement of HIPAA compliance. And 17 of 21 provided no information on whether customers could request that their data be deleted, while 18 of 21 stated that there could be follow-up for abnormal results – though often it was unclear exactly how that would work.
So, let’s circle back to the first question: Should a healthy person be able to get a laboratory test simply because they want to? The libertarians among us would argue certainly yes, though perhaps without thinking through the societal implications of abnormal results. The evidence-based medicine folks will, accurately, state that there are no clinical trials to suggest that screening healthy people with tests like these has any benefit.
But we should be cautious here. This question is scienceable; you could design a trial to test whether screening healthy 43-year-olds for testosterone level led to significant improvements in overall mortality. It would just take a few million people and about 40 years of follow-up.
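That "few million people" claim can be sanity-checked with a standard two-proportion sample-size formula. The sketch below is a back-of-the-envelope calculation, not anything from the paper: the baseline 40-year mortality (20%) and the tiny plausible benefit of screening (0.1 percentage point) are assumptions chosen purely for illustration.

```python
# Normal-approximation sample size per arm for detecting a difference
# between two proportions. Baseline risk and effect size are invented
# for illustration only.
from statistics import NormalDist

def n_per_arm(p1, p2, alpha=0.05, power=0.80):
    """Sample size per arm to detect p1 vs p2 at the given alpha/power."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)  # two-sided test
    z_beta = z.inv_cdf(power)
    variance = p1 * (1 - p1) + p2 * (1 - p2)
    return (z_alpha + z_beta) ** 2 * variance / (p1 - p2) ** 2

# Assume 40-year mortality of 20.0% without screening and 19.9% with it --
# a one-tenth-of-a-point absolute benefit, which seems a generous ceiling
# for a single screening blood test.
print(round(n_per_arm(0.200, 0.199)))  # roughly 2.5 million per arm
```

With an effect that small, the required enrollment lands in the millions, which is exactly why no one runs this trial.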
And even if it didn’t help, we let people throw their money away on useless things all the time. The only difference between someone spending money on a useless test or on a useless dietary supplement is that someone has to deal with the result.
So, can you do this right? Can you make a direct-to-consumer testing company that is not essentially a free-rider on the rest of the health care ecosystem?
I think there are ways. You’d need physicians involved at all stages to help interpret the testing and guide next steps. You’d need some transparent guidelines, written in language that patients can understand, for what will happen given any conceivable result – and what costs those results might lead to for them and their insurance company. Most important, you’d need longitudinal follow-up and the ability to recommend changes, retest in the future, and potentially address the cost implications of the downstream findings. In the end, it starts to sound very much like a doctor’s office.
F. Perry Wilson, MD, MSCE, is an associate professor of medicine and public health and director of Yale’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no relevant conflicts of interest.
A version of this article first appeared on Medscape.com.
Bad blood: Could brain bleeds be contagious?
This transcript has been edited for clarity.
How do you tell if a condition is caused by an infection?
It seems like an obvious question, right? In the post–van Leeuwenhoek era we can look at whatever part of the body is diseased under a microscope and see microbes – you know, the usual suspects.
Except when we can’t. And there are plenty of cases where we can’t: where the microbe is too small to be seen without more advanced imaging techniques, like with viruses; or when the pathogen is sparsely populated or hard to culture, like Mycobacterium.
Finding out that a condition is the result of an infection is not only an exercise for 19th century physicians. After all, it was 2005 when Barry Marshall and Robin Warren won their Nobel Prize for proving that stomach ulcers, long thought to be due to “stress,” were actually caused by a tiny microbe called Helicobacter pylori.
And this week, we are looking at a study which, once again, begins to suggest that a condition thought to be more or less random – cerebral amyloid angiopathy – may actually be the result of an infectious disease.
We’re talking about this paper, appearing in JAMA, which is just a great example of old-fashioned shoe-leather epidemiology. But let’s get up to speed on cerebral amyloid angiopathy (CAA) first.
CAA is characterized by the deposition of amyloid protein in the brain. While there are some genetic causes, they are quite rare, and most cases are thought to be idiopathic. Recent analyses suggest that somewhere between 5% and 7% of cognitively normal older adults have CAA, but the rate is much higher among those with intracerebral hemorrhage – brain bleeds. In fact, CAA is the second-most common cause of bleeding in the brain, second only to severe hypertension.
An article in Nature highlights cases that seemed to develop after the administration of cadaveric pituitary hormone.
Other studies have shown potential transmission via dura mater grafts and neurosurgical instruments. But despite those clues, no infectious organism has been identified. Some have suggested that the long latent period and difficulty of finding a responsible microbe points to a prion-like disease not yet known. But these studies are more or less case series. The new JAMA paper gives us, if not a smoking gun, a pretty decent set of fingerprints.
Here’s the idea: If CAA is caused by some infectious agent, it may be transmitted in the blood. We know that a decent percentage of people who have spontaneous brain bleeds have CAA. If those people donated blood in the past, maybe the people who received that blood would be at risk for brain bleeds too.
Of course, to really test that hypothesis, you’d need to know who every blood donor in a country was and every person who received that blood and all their subsequent diagnoses for basically their entire lives. No one has that kind of data, right?
Well, if you’ve been watching this space, you’ll know that a few countries do. Enter Sweden and Denmark, with their national electronic health record that captures all of this information, and much more, on every single person who lives or has lived in those countries since before 1970. Unbelievable.
So that’s exactly what the researchers, led by Jingchen Zhao at Karolinska (Sweden) University, did. They identified roughly 760,000 individuals in Sweden and 330,000 people in Denmark who had received a blood transfusion between 1970 and 2017.
Of course, most of those blood donors – 99% of them, actually – never went on to have any bleeding in the brain. It is a rare thing, fortunately.
But some of the donors did, on average within about 5 years of the time they donated blood. The researchers characterized each donor as either never having a brain bleed, having a single bleed, or having multiple bleeds. The latter is most strongly associated with CAA.
The big question: Would recipients who got blood from individuals who later on had brain bleeds, have brain bleeds themselves?
The answer is yes, though with an asterisk. You can see the results here. The risk of recipients having a brain bleed was lowest if the blood they received was from donors who never had a brain bleed, higher if the donor had a single brain bleed, and highest if the donor would go on to have multiple brain bleeds.
All in all, individuals who received blood from someone who would later have multiple hemorrhages were three times more likely to develop bleeds themselves. It’s fairly compelling evidence of a transmissible agent.
Of course, there are some potential confounders to consider here. Whose blood you get is not totally random. If, for example, people with type O blood are just more likely to have brain bleeds, then you could get results like this, as type O tends to donate to type O and both groups would have higher risk after donation. But the authors adjusted for blood type. They also adjusted for number of transfusions, calendar year, age, sex, and indication for transfusion.
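The blood-type confounding scenario described above is easy to see with a toy example. In the sketch below, every count is invented: type O carries a higher bleed risk, type O donors give mostly to type O recipients, and the risk ratio within each blood-type stratum is exactly 1 – yet the crude, unstratified comparison looks alarming. This is why the authors' adjustment for blood type matters.

```python
# Toy confounding demo (all counts invented). Within each blood-type
# stratum, recipients of "exposed" donors (donors who later bled) have
# the SAME risk as recipients of unexposed donors. The crude pooled
# comparison is distorted because exposure clusters in the high-risk
# type O stratum.

# stratum -> arm -> (bleed events, number of recipients)
counts = {
    "O":     {"exposed": (36, 900), "unexposed": (4, 100)},   # 4% risk, both arms
    "non-O": {"exposed": (1, 100),  "unexposed": (9, 900)},   # 1% risk, both arms
}

def pooled_risk(arm):
    """Crude risk in one arm, ignoring blood type."""
    events = sum(counts[s][arm][0] for s in counts)
    n = sum(counts[s][arm][1] for s in counts)
    return events / n

crude_rr = pooled_risk("exposed") / pooled_risk("unexposed")
print(round(crude_rr, 2))  # about 2.85, despite no effect in either stratum

for stratum, arms in counts.items():
    e, u = arms["exposed"], arms["unexposed"]
    stratum_rr = (e[0] / e[1]) / (u[0] / u[1])
    print(stratum, stratum_rr)  # 1.0 in each stratum
```

Stratifying (or adjusting) by blood type recovers the true null within strata; a crude analysis alone would have manufactured a nearly threefold "effect" out of thin air.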
Perhaps most compelling, and most clever, is that they used ischemic stroke as a negative control. Would people who received blood from someone who later had an ischemic stroke themselves be more likely to go on to have an ischemic stroke? No signal at all. It does not appear that there is a transmissible agent associated with ischemic stroke – only the brain bleeds.
I know what you’re thinking. What’s the agent? What’s the microbe, or virus, or prion, or toxin? The study gives us no insight there. These nationwide databases are awesome but they can only do so much. Because of the vagaries of medical coding and the difficulty of making the CAA diagnosis, the authors are using brain bleeds as a proxy here; we don’t even know for sure whether these were CAA-associated brain bleeds.
It’s also worth noting that there’s little we can do about this. None of the blood donors in this study had a brain bleed prior to donation; it’s not like we could screen people out of donating in the future. We have no test for whatever this agent is, if it even exists, nor do we have a potential treatment. Fortunately, whatever it is, it is extremely rare.
Still, this paper feels like a shot across the bow. At this point, the probability has shifted strongly away from CAA being a purely random disease and toward it being an infectious one. It may be time to round up some of the unusual suspects.
Dr. F. Perry Wilson is an associate professor of medicine and public health and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
This transcript has been edited for clarity.
How do you tell if a condition is caused by an infection?
It seems like an obvious question, right? In the post–van Leeuwenhoek era we can look at whatever part of the body is diseased under a microscope and see microbes – you know, the usual suspects.
Except when we can’t. And there are plenty of cases where we can’t: where the microbe is too small to be seen without more advanced imaging techniques, like with viruses; or when the pathogen is sparsely populated or hard to culture, like Mycobacterium.
Finding out that a condition is the result of an infection is not only an exercise for 19th century physicians. After all, it was 2008 when Barry Marshall and Robin Warren won their Nobel Prize for proving that stomach ulcers, long thought to be due to “stress,” were actually caused by a tiny microbe called Helicobacter pylori.
And this week, we are looking at a study which, once again, begins to suggest that a condition thought to be more or less random – cerebral amyloid angiopathy – may actually be the result of an infectious disease.
We’re talking about this paper, appearing in JAMA, which is just a great example of old-fashioned shoe-leather epidemiology. But let’s get up to speed on cerebral amyloid angiopathy (CAA) first.
CAA is characterized by the deposition of amyloid protein in the brain. While there are some genetic causes, they are quite rare, and most cases are thought to be idiopathic. Recent analyses suggest that somewhere between 5% and 7% of cognitively normal older adults have CAA, but the rate is much higher among those with intracerebral hemorrhage – brain bleeds. In fact, CAA is the second-most common cause of bleeding in the brain, second only to severe hypertension.
An article in Nature highlights cases that seemed to develop after the administration of cadaveric pituitary hormone.
Other studies have shown potential transmission via dura mater grafts and neurosurgical instruments. But despite those clues, no infectious organism has been identified. Some have suggested that the long latent period and difficulty of finding a responsible microbe points to a prion-like disease not yet known. But these studies are more or less case series. The new JAMA paper gives us, if not a smoking gun, a pretty decent set of fingerprints.
Here’s the idea: If CAA is caused by some infectious agent, it may be transmitted in the blood. We know that a decent percentage of people who have spontaneous brain bleeds have CAA. If those people donated blood in the past, maybe the people who received that blood would be at risk for brain bleeds too.
Of course, to really test that hypothesis, you’d need to know who every blood donor in a country was and every person who received that blood and all their subsequent diagnoses for basically their entire lives. No one has that kind of data, right?
Well, if you’ve been watching this space, you’ll know that a few countries do. Enter Sweden and Denmark, with their national electronic health record that captures all of this information, and much more, on every single person who lives or has lived in those countries since before 1970. Unbelievable.
So that’s exactly what the researchers, led by Jingchen Zhao at Karolinska (Sweden) University, did. They identified roughly 760,000 individuals in Sweden and 330,000 people in Denmark who had received a blood transfusion between 1970 and 2017.
Of course, most of those blood donors – 99% of them, actually – never went on to have any bleeding in the brain. It is a rare thing, fortunately.
This transcript has been edited for clarity.
How do you tell if a condition is caused by an infection?
It seems like an obvious question, right? In the post–van Leeuwenhoek era we can look at whatever part of the body is diseased under a microscope and see microbes – you know, the usual suspects.
Except when we can’t. And there are plenty of cases where we can’t: where the microbe is too small to be seen without more advanced imaging techniques, like with viruses; or when the pathogen is sparsely populated or hard to culture, like Mycobacterium.
Finding out that a condition is the result of an infection is not an exercise only for 19th century physicians. After all, it was 2005 when Barry Marshall and Robin Warren won their Nobel Prize for proving that stomach ulcers, long thought to be due to “stress,” were actually caused by a tiny microbe called Helicobacter pylori.
And this week, we are looking at a study which, once again, begins to suggest that a condition thought to be more or less random – cerebral amyloid angiopathy – may actually be the result of an infectious disease.
We’re talking about this paper, appearing in JAMA, which is just a great example of old-fashioned shoe-leather epidemiology. But let’s get up to speed on cerebral amyloid angiopathy (CAA) first.
CAA is characterized by the deposition of amyloid protein in the brain. While there are some genetic causes, they are quite rare, and most cases are thought to be idiopathic. Recent analyses suggest that somewhere between 5% and 7% of cognitively normal older adults have CAA, but the rate is much higher among those with intracerebral hemorrhage – brain bleeds. In fact, CAA is the second most common cause of bleeding in the brain, behind only severe hypertension.
An article in Nature highlights cases that seemed to develop after the administration of cadaveric pituitary hormone.
Other studies have shown potential transmission via dura mater grafts and neurosurgical instruments. But despite those clues, no infectious organism has been identified. Some have suggested that the long latent period and difficulty of finding a responsible microbe points to a prion-like disease not yet known. But these studies are more or less case series. The new JAMA paper gives us, if not a smoking gun, a pretty decent set of fingerprints.
Here’s the idea: If CAA is caused by some infectious agent, it may be transmitted in the blood. We know that a decent percentage of people who have spontaneous brain bleeds have CAA. If those people donated blood in the past, maybe the people who received that blood would be at risk for brain bleeds too.
Of course, to really test that hypothesis, you’d need to know who every blood donor in a country was and every person who received that blood and all their subsequent diagnoses for basically their entire lives. No one has that kind of data, right?
Well, if you’ve been watching this space, you’ll know that a few countries do. Enter Sweden and Denmark, with their national electronic health records, which capture all of this information, and much more, on every single person who has lived in those countries since before 1970. Unbelievable.
So that’s exactly what the researchers, led by Jingchen Zhao at Karolinska (Sweden) University, did. They identified roughly 760,000 individuals in Sweden and 330,000 people in Denmark who had received a blood transfusion between 1970 and 2017.
Of course, most of those blood donors – 99% of them, actually – never went on to have any bleeding in the brain. It is a rare thing, fortunately.
But some of the donors did, on average within about 5 years of the time they donated blood. The researchers characterized each donor as either never having a brain bleed, having a single bleed, or having multiple bleeds. The latter is most strongly associated with CAA.
The big question: Would recipients who got blood from individuals who later on had brain bleeds, have brain bleeds themselves?
The answer is yes, though with an asterisk. You can see the results here. The risk of recipients having a brain bleed was lowest if the blood they received was from people who never had a brain bleed, higher if the individual had a single brain bleed, and highest if they got blood from a donor who would go on to have multiple brain bleeds.
All in all, individuals who received blood from someone who would later have multiple hemorrhages were three times more likely to develop bleeds themselves. It’s fairly compelling evidence of a transmissible agent.
Of course, there are some potential confounders to consider here. Whose blood you get is not totally random. If, for example, people with type O blood are just more likely to have brain bleeds, then you could get results like this, as type O tends to donate to type O and both groups would have higher risk after donation. But the authors adjusted for blood type. They also adjusted for number of transfusions, calendar year, age, sex, and indication for transfusion.
Perhaps most compelling, and most clever, is that they used ischemic stroke as a negative control. Would people who received blood from someone who later had an ischemic stroke themselves be more likely to go on to have an ischemic stroke? No signal at all. It does not appear that there is a transmissible agent associated with ischemic stroke – only the brain bleeds.
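To make the shape of this comparison concrete, here is a minimal sketch on entirely synthetic numbers. The counts and group sizes below are invented for illustration only; they mimic the direction of the study’s findings, not its actual data, and the real analysis used adjusted hazard ratios rather than these raw rate ratios.

```python
# Toy version of the study's core comparison: brain-bleed rates among
# transfusion recipients, grouped by the donor's later bleed history.
# All counts are synthetic, chosen only to mirror the reported pattern.

# recipients[group] = (number of recipients, number who later had a bleed)
recipients = {
    "donor_no_bleed": (100_000, 100),     # baseline rate: 0.1%
    "donor_single_bleed": (2_000, 4),     # higher than baseline
    "donor_multiple_bleeds": (1_000, 3),  # ~3x baseline, as reported
}

rates = {group: bleeds / n for group, (n, bleeds) in recipients.items()}
baseline = rates["donor_no_bleed"]
relative_risk = {group: rate / baseline for group, rate in rates.items()}

# The negative-control logic is the same computation run on ischemic
# stroke instead of hemorrhage; a flat relative risk there argues
# against confounding by, say, the indication for transfusion.
```

The grouping logic, not the arithmetic, is the point: the donor’s future diagnosis defines the exposure group, and the recipient’s outcome is compared across those groups.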
I know what you’re thinking. What’s the agent? What’s the microbe, or virus, or prion, or toxin? The study gives us no insight there. These nationwide databases are awesome but they can only do so much. Because of the vagaries of medical coding and the difficulty of making the CAA diagnosis, the authors are using brain bleeds as a proxy here; we don’t even know for sure whether these were CAA-associated brain bleeds.
It’s also worth noting that there’s little we can do about this. None of the blood donors in this study had a brain bleed prior to donation; it’s not like we could screen people out of donating in the future. We have no test for whatever this agent is, if it even exists, nor do we have a potential treatment. Fortunately, whatever it is, it is extremely rare.
Still, this paper feels like a shot across the bow. At this point, the probability has shifted strongly away from CAA being a purely random disease and toward it being an infectious one. It may be time to round up some of the unusual suspects.
Dr. F. Perry Wilson is an associate professor of medicine and public health and director of Yale University’s Clinical and Translational Research Accelerator in New Haven, Conn. He reported no conflicts of interest.
A version of this article first appeared on Medscape.com.
The new normal in body temperature
This transcript has been edited for clarity.
Every branch of science has its constants. Physics has the speed of light, the gravitational constant, the Planck constant. Chemistry gives us Avogadro’s number, Faraday’s constant, the charge of an electron. Medicine isn’t quite as reliable as physics when it comes to these things, but insofar as there are any constants in medicine, might I suggest normal body temperature: 37° Celsius, 98.6° Fahrenheit.
Sure, serum sodium may be less variable and lactate concentration more clinically relevant, but even my 7-year-old knows that normal body temperature is 98.6°.
Except, as it turns out, 98.6° isn’t normal at all.
How did we arrive at 37.0° C for normal body temperature? We got it from this guy – German physician Carl Reinhold August Wunderlich, who, in addition to looking eerily like Luciano Pavarotti, was the first to realize that fever was not itself a disease but a symptom of one.
In 1851, Dr. Wunderlich released his measurements of more than 1 million body temperatures taken from 25,000 Germans – a painstaking process at the time, which employed a foot-long thermometer and took 20 minutes to obtain a measurement.
The average temperature measured, of course, was 37° C.
We’re more than 150 years post-Wunderlich right now, and the average person in the United States might be quite a bit different from the average German in 1850. Moreover, we can do a lot better than just measuring a ton of people and taking the average, because we have statistics. The problem with measuring a bunch of people and taking the average temperature as normal is that you can’t be sure that the people you are measuring are normal. There are obvious causes of elevated temperature that you could exclude. Let’s not take people with a respiratory infection or who are taking Tylenol, for example. But as highlighted in this paper in JAMA Internal Medicine, we can do a lot better than that.
The study leverages the fact that body temperature is typically measured during all medical office visits and recorded in the ever-present electronic medical record.
Researchers from Stanford identified 724,199 patient encounters with outpatient temperature data. They excluded extreme temperatures – less than 34° C or greater than 40° C – excluded patients under 20 or above 80 years, and excluded those with extremes of height, weight, or body mass index.
You end up with a distribution like this. Note that the peak is clearly lower than 37° C.
But we’re still not at “normal.” Some people would be seeing their doctor for conditions that affect body temperature, such as infection. You could use diagnosis codes to flag these individuals and drop them, but that feels a bit arbitrary.
I really love how the researchers used data to fix this problem. They used a technique called LIMIT (Laboratory Information Mining for Individualized Thresholds). It works like this:
Take all the temperature measurements and then identify the outliers – the very tails of the distribution.
Look at all the diagnosis codes in those distributions. Determine which diagnosis codes are overrepresented in those distributions. Now you have a data-driven way to say that yes, these diagnoses are associated with weird temperatures. Next, eliminate everyone with those diagnoses from the dataset. What you are left with is a normal population, or at least a population that doesn’t have a condition that seems to meaningfully affect temperature.
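The steps above can be sketched in a few lines of Python. This is a toy illustration of the LIMIT idea on synthetic encounters, not the authors’ code; the 2.5%/97.5% tail cutoffs and the 2x enrichment threshold are my own assumptions for the sketch.

```python
from collections import Counter

# Each encounter: (temperature in Celsius, set of diagnosis codes).
# Synthetic data: a healthy bulk plus two diagnosis-linked tails.
encounters = (
    [(36.6, set()) for _ in range(90)]
    + [(35.2, {"diabetes"}) for _ in range(5)]   # low-temperature tail
    + [(38.8, {"fever"}) for _ in range(5)]      # high-temperature tail
)

# Step 1: identify the outliers -- the tails of the distribution.
temps = sorted(t for t, _ in encounters)
lo = temps[int(0.025 * len(temps))]              # ~2.5th percentile
hi = temps[int(0.975 * len(temps))]              # ~97.5th percentile
outliers = [e for e in encounters if e[0] <= lo or e[0] >= hi]

# Step 2: find diagnosis codes overrepresented among the outliers.
def dx_rates(rows):
    """Fraction of rows carrying each diagnosis code."""
    counts = Counter(dx for _, dxs in rows for dx in dxs)
    return {dx: n / len(rows) for dx, n in counts.items()}

overall = dx_rates(encounters)
tail = dx_rates(outliers)
flagged = {dx for dx, rate in tail.items() if rate > 2 * overall[dx]}

# Step 3: drop everyone carrying a flagged diagnosis; the mean of what
# remains defines "normal" temperature.
normal = [t for t, dxs in encounters if not (dxs & flagged)]
normal_mean = sum(normal) / len(normal)
```

On this toy data, both “diabetes” and “fever” end up flagged as temperature-associated and their encounters are excluded, which is exactly the data-driven filtering the study performed at scale.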
So, who was dropped? Well, a lot of people, actually. It turned out that diabetes was way overrepresented in the outlier group: Although 9.2% of the population had diabetes, 26% of people with very low temperatures did, so everyone with diabetes was removed from the dataset. And while 5% of the population had a cough at their encounter, 7% of the people with very high temperatures and 7% of the people with very low temperatures did, so everyone with a cough was thrown out as well.
The algorithm excluded people on antibiotics or who had sinusitis, urinary tract infections, pneumonia, and, yes, a diagnosis of “fever.” The list makes sense, which is always nice when you have a purely algorithmic classification system.
What do we have left? What is the real normal temperature? Ready?
It’s 36.64° C, or about 98.0° F.
Of course, normal temperature varied depending on the time of day it was measured – higher in the afternoon.
The normal temperature in women tended to be higher than in men. The normal temperature declined with age as well.
In fact, the researchers built a nice online calculator where you can enter your own, or your patient’s, parameters and calculate a normal body temperature for them. Here’s mine. My normal temperature at around 2 p.m. should be 36.7° C.
So, we’re all more cold-blooded than we thought. Is this just because of better methods? Maybe. But studies have actually shown that body temperature may be decreasing over time in humans, possibly because of the lower levels of inflammation we face in modern life (thanks to improvements in hygiene and antibiotics).
Of course, I’m sure some of you are asking yourselves whether any of this really matters. Is 37° C close enough?
Sure, this may be sort of puttering around the edges of physical diagnosis, but I think the methodology is really interesting and can obviously be applied to other broadly collected data points. But these data show us that thin, older individuals really do run cooler, and that we may need to pay more attention to a low-grade fever in that population than we otherwise would.
In any case, it’s time for a little re-education. If someone asks you what normal body temperature is, just say 36.6° C, 98.0° F. For his work in this area, I suggest we call it Wunderlich’s constant.
Dr. Wilson is associate professor of medicine and public health at Yale University, New Haven, Conn., and director of Yale’s Clinical and Translational Research Accelerator. He has no disclosures.
A version of this article appeared on Medscape.com.
‘Decapitated’ boy saved by surgery team
This transcript has been edited for clarity.
F. Perry Wilson, MD, MSCE: I am joined today by Dr. Ohad Einav. He’s a staff surgeon in orthopedics at Hadassah Medical Center in Jerusalem. He’s with me to talk about an absolutely incredible surgical case, something that is terrifying to most non–orthopedic surgeons and I imagine is fairly scary for spine surgeons like him as well.
Ohad Einav, MD: Thank you for having me.
Dr. Wilson: Can you tell us about Suleiman Hassan and what happened to him before he came into your care?
Dr. Einav: Hassan is a 12-year-old child who was riding his bicycle on the West Bank, about 40 minutes from here. Unfortunately, he was involved in a motor vehicle accident and he suffered injuries to his abdomen and cervical spine. He was transported to our service by helicopter from the scene of the accident.
Dr. Wilson: “Injury to the cervical spine” might be something of an understatement. He had what’s called atlanto-occipital dislocation, colloquially often referred to as internal decapitation. Can you tell us what that means? It sounds terrifying.
Dr. Einav: It’s an injury to the ligaments between the occiput and the upper cervical spine, with or without bony fracture. The atlanto-occipital joint is formed by the superior articular facet of the atlas and the occipital condyle, stabilized by an articular capsule between the head and neck, and is supported by various ligaments around it that stabilize the joint and allow joint movements, including flexion, extension, and some rotation in the lower levels.
Dr. Wilson: This joint has several degrees of freedom, which means it needs a lot of support. With this type of injury, where essentially you have severing of the ligaments, is it usually survivable? How dangerous is this?
Dr. Einav: The mortality rate is 50%-60%, depending on the primary impact, the injury itself, the transport afterward, and then the surgery and surgical management.
Dr. Wilson: Tell us a bit about this patient’s status when he came to your medical center. I assume he was in bad shape.
Dr. Einav: Hassan arrived at our medical center with a Glasgow Coma Scale score of 15. He was fully conscious. He was hemodynamically stable except for a bad laceration on his abdomen. He had a Philadelphia collar around his neck. He was transported by chopper because the paramedics suspected that he had a cervical spine injury and decided to bring him to a Level 1 trauma center.
He was monitored and we treated him according to the ATLS [advanced trauma life support] protocol. He didn’t have any gross sensory deficits, but he was a little confused about the whole situation and the accident. Therefore, we could do a general examination but we couldn’t rely on that regarding any sensory deficit that he may or may not have. We decided as a team that it would be better to slow down and control the situation. We decided not to operate on him immediately. We basically stabilized him and made sure that he didn’t have any traumatic internal organ damage. Later on we took him to the OR and performed surgery.
Dr. Wilson: It’s amazing that he had intact motor function, considering the extent of his injury. The spinal cord was spared somewhat during the injury. There must have been a moment when you realized that this kid, who was conscious and could move all four extremities, had a very severe neck injury. Was that due to a CT scan or physical exam? And what was your feeling when you saw that he had atlanto-occipital dislocation?
Dr. Einav: As a surgeon, you have a gut feeling in regard to the general examination of the patient. But I never rely on gut feelings. On the CT, I understood exactly what he had, what we needed to do, and the time frame.
Dr. Wilson: You’ve done these types of surgeries before, right? Obviously, no one has done a lot of them because this isn’t very common. But you knew what to do. Did you have a plan? Where does your experience come into play in a situation like this?
Dr. Einav: I graduated from the spine program at the University of Toronto, where I did a fellowship in trauma of the spine and complex spine surgery. I had very good teachers, and during my fellowship I treated a few cases in older patients that were similar but not the same. Therefore, I knew exactly what needed to be done.
Dr. Wilson: For those of us who aren’t surgeons, take us into the OR with you. This is obviously an incredibly delicate procedure. You are high up in the spinal cord at the base of the brain. The slightest mistake could have devastating consequences. What are the key elements of this procedure? What can go wrong here? What is the number-one thing you have to look out for when you’re trying to fix an internal decapitation?
Dr. Einav: The key element in surgeries of the cervical spine – trauma and complex spine surgery – is planning. I never go to the OR without knowing what I’m going to do. I have a few plans – plan A, plan B, plan C – in case something fails. So, I definitely know what the next step will be. I always think about the surgery a few hours before, if I have time to prepare.
The second thing that is very important is teamwork. The team needs to be coordinated. Everybody needs to know what their job is. With these types of injuries, it’s not the time for rookies. If you are new, please stand back and let the more experienced people do that job. I’m talking about surgeons, nurses, anesthesiologists – everyone.
Another important thing in planning is choosing the right hardware. In this case we had a problem, because most spinal hardware is designed for adults: The adult plates and screws are too big, and there isn’t much on the market for the pediatric population, so we had to improvise.
Dr. Wilson: Tell us more about that. How do you improvise spinal hardware for a 12-year-old?
Dr. Einav: In this case, I chose to use hardware from one of the companies that works with us.
You can see in this model the area of the injury and the area that we worked on. To perform the surgery, I had to use some plates and rods from a different company (NuVasive), whose hardware has a small attachment to the skull. That was helpful for affixing the skull to the cervical spine, instead of using a big plate that would sit at the base of the skull and would not be very good for him. Most of the hardware is made for adults, not for kids.
Dr. Wilson: Will that hardware preserve the motor function of his neck? Will he be able to turn his head and extend and flex it?
Dr. Einav: The injury leads to instability and destruction of both articulations between the head and neck. Therefore, those articulations won’t be able to function the same way in the future. There is a decrease of something like 50% of the flexion and extension of Hassan’s cervical spine. Therefore, I decided that in this case there would be no chance of saving Hassan’s motor function unless we performed a fusion between the head and the neck, and therefore I decided that this would be the best procedure with the best survival rate. So, in the future, he will have some diminished flexion, extension, and rotation of his head.
Dr. Wilson: How long did his surgery take?
Dr. Einav: To be honest, I don’t remember. But I can tell you that it took us time. It was very challenging to coordinate with everyone. The most problematic part of the surgery to perform is what we call “flip-over.”
The anesthesiologist intubated the patient when he was supine, and later on, we flipped him prone to operate on the spine. This maneuver can actually lead to injury by itself, and injury at this level is fatal. So, we took our time and got Hassan into the OR. The anesthesiologist did a great job with the GlideScope – inserting the endotracheal tube. Later on, we neuromonitored him. Basically, we connected Hassan’s peripheral nerves to a computer and monitored his motor function. Gently we flipped him over, and after that we saw a little change in his motor function, so we had to modify his position so we could preserve his motor function. We then started the procedure, which took a few hours. I don’t know exactly how many.
Dr. Wilson: That just speaks to how delicate this is for everything from the intubation, where typically you’re manipulating the head, to the repositioning. Clearly this requires a lot of teamwork.
What happened after the operation? How is he doing?
Dr. Einav: After the operation, Hassan had a great recovery. He’s doing well. He doesn’t have any motor or sensory deficits. He’s able to ambulate without any aid. He had no signs of infection, which can happen after a car accident, neither from his abdominal wound nor from the occipital cervical surgery. He feels well. We saw him in the clinic. We removed his collar. We monitored him at the clinic. He looked amazing.
Dr. Wilson: That’s incredible. Are there long-term risks for him that you need to be looking out for?
Dr. Einav: Yes, and that’s the reason that we are monitoring him post surgery. While he was in the hospital, we monitored his motor and sensory functions, as well as his wound healing. Later on, in the clinic, for a few weeks after surgery we monitored for any failure of the hardware and bone graft. We check for healing of the bone graft and bone substitutes we put in to heal those bones.
Dr. Wilson: He will grow, right? He’s only 12, so he still has some years of growth in him. Is he going to need more surgery or any kind of hardware upgrade?
Dr. Einav: I hope not. In my surgeries, I never rely on the hardware for long durations. If I decide to do, for example, fusion, I rely on the hardware for a certain amount of time. And then I plan that the biology will do the work. If I plan for fusion, I put bone grafts in the preferred area for a fusion. Then if the hardware fails, I wouldn’t need to take out the hardware, and there would be no change in the condition of the patient.
Dr. Wilson: What an incredible story. It’s clear that you and your team kept your cool despite a very high-acuity situation with a ton of risk. What a tremendous outcome that this boy is not only alive but fully functional. So, congratulations to you and your team. That was very strong work.
Dr. Einav: Thank you very much. I would like to thank our team. We have to remember that the surgeon is not standing alone in the war. Hassan’s story is a success story of a very big group of people from various backgrounds and religions. They work day and night to help people and save lives. To the paramedics, the physiologists, the traumatologists, the pediatricians, the nurses, the physiotherapists, and obviously the surgeons, a big thank you. His story is our success story.
Dr. Wilson: It’s inspiring to see so many people come together to do what we all are here for, which is to fight against suffering, disease, and death. Thank you for keeping up that fight. And thank you for joining me here.
Dr. Einav: Thank you very much.
A version of this article first appeared on Medscape.com.
This transcript has been edited for clarity.
F. Perry Wilson, MD, MSCE: I am joined today by Dr. Ohad Einav. He’s a staff surgeon in orthopedics at Hadassah Medical Center in Jerusalem. He’s with me to talk about an absolutely incredible surgical case, something that is terrifying to most non–orthopedic surgeons and I imagine is fairly scary for spine surgeons like him as well.
Ohad Einav, MD: Thank you for having me.
Dr. Wilson: Can you tell us about Suleiman Hassan and what happened to him before he came into your care?
Dr. Einav: Hassan is a 12-year-old child who was riding his bicycle on the West Bank, about 40 minutes from here. Unfortunately, he was involved in a motor vehicle accident and he suffered injuries to his abdomen and cervical spine. He was transported to our service by helicopter from the scene of the accident.
Dr. Wilson: “Injury to the cervical spine” might be something of an understatement. He had what’s called atlanto-occipital dislocation, colloquially often referred to as internal decapitation. Can you tell us what that means? It sounds terrifying.
Dr. Einav: It’s an injury to the ligaments between the occiput and the upper cervical spine, with or without bony fracture. The atlanto-occipital joint is formed by the superior articular facet of the atlas and the occipital condyle, stabilized by an articular capsule between the head and neck, and is supported by various ligaments around it that stabilize the joint and allow joint movements, including flexion, extension, and some rotation in the lower levels.
Dr. Wilson: This joint has several degrees of freedom, which means it needs a lot of support. With this type of injury, where essentially you have severing of the ligaments, is it usually survivable? How dangerous is this?
Dr. Einav: The mortality rate is 50%-60%, depending on the primary impact, the injury, transportation later on, and then the surgery and surgical management.
Dr. Wilson: Tell us a bit about this patient’s status when he came to your medical center. I assume he was in bad shape.
Dr. Einav: Hassan arrived at our medical center with a Glasgow Coma Scale score of 15. He was fully conscious. He was hemodynamically stable except for a bad laceration on his abdomen. He had a Philadelphia collar around his neck. He was transported by chopper because the paramedics suspected that he had a cervical spine injury and decided to bring him to a Level 1 trauma center.
He was monitored and we treated him according to the ATLS [advanced trauma life support] protocol. He didn’t have any gross sensory deficits, but he was a little confused about the whole situation and the accident. Therefore, we could do a general examination, but we couldn’t rely on it to rule out a sensory deficit he may or may not have had. We decided as a team that it would be better to slow down and control the situation, so we did not operate on him immediately. We stabilized him and made sure that he didn’t have any traumatic internal organ damage. Later on, we took him to the OR and performed surgery.
Dr. Wilson: It’s amazing that he had intact motor function, considering the extent of his injury. The spinal cord was spared somewhat during the injury. There must have been a moment when you realized that this kid, who was conscious and could move all four extremities, had a very severe neck injury. Was that due to a CT scan or physical exam? And what was your feeling when you saw that he had atlanto-occipital dislocation?
Dr. Einav: As a surgeon, you have a gut feeling in regard to the general examination of the patient. But I never rely on gut feelings. On the CT, I understood exactly what he had, what we needed to do, and the time frame.
Dr. Wilson: You’ve done these types of surgeries before, right? Obviously, no one has done a lot of them because this isn’t very common. But you knew what to do. Did you have a plan? Where does your experience come into play in a situation like this?
Dr. Einav: I graduated from the spine program at the University of Toronto, where I did a fellowship in spine trauma and complex spine surgery. I had very good teachers, and during my fellowship I treated a few similar, though not identical, cases in older patients. Therefore, I knew exactly what needed to be done.
Dr. Wilson: For those of us who aren’t surgeons, take us into the OR with you. This is obviously an incredibly delicate procedure. You are high up in the spinal cord at the base of the brain. The slightest mistake could have devastating consequences. What are the key elements of this procedure? What can go wrong here? What is the number-one thing you have to look out for when you’re trying to fix an internal decapitation?
Dr. Einav: The key element in surgeries of the cervical spine – trauma and complex spine surgery – is planning. I never go to the OR without knowing what I’m going to do. I have a few plans – plan A, plan B, plan C – in case something fails. So, I definitely know what the next step will be. I always think about the surgery a few hours before, if I have time to prepare.
The second thing that is very important is teamwork. The team needs to be coordinated. Everybody needs to know what their job is. With these types of injuries, it’s not the time for rookies. If you are new, please stand back and let the more experienced people do that job. I’m talking about surgeons, nurses, anesthesiologists – everyone.
Another important thing in planning is choosing the right hardware. For example, in this case we had a problem because most hardware is designed for adults; there isn’t much on the market for the pediatric population. The adult plates and screws are too big, so we had to improvise.
Dr. Wilson: Tell us more about that. How do you improvise spinal hardware for a 12-year-old?
Dr. Einav: In this case, I chose to use hardware from one of the companies that works with us.
You can see in this model the area of the injury and the area that we worked on. To perform the surgery, I had to use plates and rods from a different company (NuVasive), whose hardware has a small attachment to the skull. That was helpful for affixing the skull to the cervical spine, instead of using a big plate that would sit at the base of the skull and would not have been good for him. Most hardware is made for adults, not for kids.
Dr. Wilson: Will that hardware preserve the motor function of his neck? Will he be able to turn his head and extend and flex it?
Dr. Einav: The injury causes instability and destruction of both articulations between the head and neck, so those joints won’t be able to function as they did before. Hassan has lost roughly 50% of the flexion and extension of his cervical spine. I decided that there was no chance of preserving his motor function unless we performed a fusion between the head and the neck, which I judged to be the procedure with the best survival rate. So, in the future, he will have some diminished flexion, extension, and rotation of his head.
Dr. Wilson: How long did his surgery take?
Dr. Einav: To be honest, I don’t remember. But I can tell you that it took us time. It was very challenging to coordinate with everyone. The most problematic part of the surgery to perform is what we call “flip-over.”
The anesthesiologist intubated the patient when he was supine, and later on, we flipped him prone to operate on the spine. This maneuver can actually lead to injury by itself, and injury at this level is fatal. So, we took our time and got Hassan into the OR. The anesthesiologist did a great job with the GlideScope – inserting the endotracheal tube. Later on, we neuromonitored him. Basically, we connected Hassan’s peripheral nerves to a computer and monitored his motor function. Gently we flipped him over, and after that we saw a little change in his motor function, so we had to modify his position so we could preserve his motor function. We then started the procedure, which took a few hours. I don’t know exactly how many.
Dr. Wilson: That just speaks to how delicate this is for everything from the intubation, where typically you’re manipulating the head, to the repositioning. Clearly this requires a lot of teamwork.
What happened after the operation? How is he doing?
Dr. Einav: After the operation, Hassan had a great recovery. He’s doing well. He doesn’t have any motor or sensory deficits, and he’s able to ambulate without any aid. He had no signs of infection, which can occur after a car accident, in either his abdominal wound or the occipitocervical surgical site. He feels well. We saw him in the clinic, removed his collar, and continued monitoring him. He looked amazing.
Dr. Wilson: That’s incredible. Are there long-term risks for him that you need to be looking out for?
Dr. Einav: Yes, and that’s the reason we are monitoring him after surgery. While he was in the hospital, we monitored his motor and sensory function as well as his wound healing. Later on, in the clinic, for a few weeks after surgery, we watched for any failure of the hardware and checked that the bone graft and bone substitutes we placed were healing properly.
Dr. Wilson: He will grow, right? He’s only 12, so he still has some years of growth in him. Is he going to need more surgery or any kind of hardware upgrade?
Dr. Einav: I hope not. In my surgeries, I never rely on the hardware for long durations. If I decide to do, for example, fusion, I rely on the hardware for a certain amount of time. And then I plan that the biology will do the work. If I plan for fusion, I put bone grafts in the preferred area for a fusion. Then if the hardware fails, I wouldn’t need to take out the hardware, and there would be no change in the condition of the patient.
Dr. Wilson: What an incredible story. It’s clear that you and your team kept your cool despite a very high-acuity situation with a ton of risk. What a tremendous outcome that this boy is not only alive but fully functional. So, congratulations to you and your team. That was very strong work.
Dr. Einav: Thank you very much. I would like to thank our team. We have to remember that the surgeon is not standing alone in the war. Hassan’s story is a success story of a very big group of people from various backgrounds and religions. They work day and night to help people and save lives. To the paramedics, the physiologists, the traumatologists, the pediatricians, the nurses, the physiotherapists, and obviously the surgeons, a big thank you. His story is our success story.
Dr. Wilson: It’s inspiring to see so many people come together to do what we all are here for, which is to fight against suffering, disease, and death. Thank you for keeping up that fight. And thank you for joining me here.
Dr. Einav: Thank you very much.
A version of this article first appeared on Medscape.com.