FDA to add myocarditis warning to mRNA COVID-19 vaccines
The Food and Drug Administration is adding a warning to mRNA COVID-19 vaccines’ fact sheets as medical experts continue to investigate cases of heart inflammation, which are rare but are more likely to occur in young men and teen boys.
Doran Fink, MD, PhD, deputy director of the FDA’s division of vaccines and related products applications, told a Centers for Disease Control and Prevention expert panel on June 23 that the FDA is finalizing language on a warning statement for health care providers, vaccine recipients, and parents or caregivers of teens.
The incidents are more likely to follow the second dose of the Pfizer or Moderna vaccine, with chest pain and other symptoms occurring within several days to a week, the warning will note.
“Based on limited follow-up, most cases appear to have been associated with resolution of symptoms, but limited information is available about potential long-term sequelae,” Dr. Fink said, describing the statement to the Advisory Committee on Immunization Practices, independent experts who advise the CDC.
“Symptoms suggestive of myocarditis or pericarditis should result in vaccine recipients seeking medical attention,” he said.
Benefits outweigh risks
Although no formal vote occurred after the meeting, the ACIP members delivered a strong endorsement for continuing to vaccinate 12- to 29-year-olds with the Pfizer and Moderna vaccines despite the warning.
“To me it’s clear, based on current information, that the benefits of vaccine clearly outweigh the risks,” said ACIP member Veronica McNally, president and CEO of the Franny Strong Foundation in Bloomfield, Mich., a sentiment echoed by other members.
As ACIP was meeting, leaders of the nation’s major physician, nurse, and public health associations issued a statement supporting continued vaccination: “The facts are clear: this is an extremely rare side effect, and only an exceedingly small number of people will experience it after vaccination.
“Importantly, for the young people who do, most cases are mild, and individuals recover often on their own or with minimal treatment. In addition, we know that myocarditis and pericarditis are much more common if you get COVID-19, and the risks to the heart from COVID-19 infection can be more severe.”
ACIP heard the evidence behind that claim. According to the Vaccine Safety Datalink, which contains data from more than 12 million medical records, myocarditis or pericarditis occurs in 12- to 39-year-olds at a rate of 8 per 1 million after the second Pfizer dose and 19.8 per 1 million after the second Moderna dose.
The CDC continues to investigate the link between the mRNA vaccines and heart inflammation, including any differences between the vaccines.
Most of the symptoms resolved quickly, said Tom Shimabukuro, deputy director of CDC’s Immunization Safety Office. Of 323 cases analyzed by the CDC, 309 were hospitalized, 295 were discharged, and 218, or 79%, had recovered from symptoms.
“Most postvaccine myocarditis has been responding to minimal treatment,” pediatric cardiologist Matthew Oster, MD, MPH, from Children’s Healthcare of Atlanta, told the panel.
COVID ‘risks are higher’
Overall, the CDC has reported 2,767 COVID-19 deaths among people aged 12-29 years, and there have been 4,018 reported cases of the COVID-linked inflammatory disorder MIS-C since the beginning of the pandemic.
That amounts to 1 MIS-C case in every 3,200 COVID infections – 36% of them among teens aged 12-20 years and 62% among children who are Hispanic or non-Hispanic Black, according to a CDC presentation.
The CDC estimated that every 1 million second-dose COVID vaccines administered to 12- to 17-year-old boys could prevent 5,700 cases of COVID-19, 215 hospitalizations, 71 ICU admissions, and 2 deaths. There could also be 56-69 myocarditis cases.
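To see how those figures stack up side by side, here is a minimal back-of-the-envelope sketch in Python. It simply restates the CDC per-million estimates quoted above and computes a crude ratio; the variable names and the ratio itself are our illustration, not the CDC's model.

```python
# Crude illustration using the CDC per-million estimates quoted above for
# 12- to 17-year-old boys (per 1 million second doses). Not the CDC's model;
# it only restates the article's figures and computes a simple ratio.

prevented = {
    "COVID-19 cases": 5_700,
    "hospitalizations": 215,
    "ICU admissions": 71,
    "deaths": 2,
}
myocarditis_low, myocarditis_high = 56, 69  # expected myocarditis cases

for outcome, count in prevented.items():
    print(f"Prevented {outcome} per 1M second doses: {count}")
print(f"Expected myocarditis cases per 1M second doses: "
      f"{myocarditis_low}-{myocarditis_high}")

# Rough ratio of hospitalizations averted to myocarditis cases expected
# (most post-vaccine myocarditis responded to minimal treatment).
ratio_low = prevented["hospitalizations"] / myocarditis_high
ratio_high = prevented["hospitalizations"] / myocarditis_low
print(f"Hospitalizations averted per expected myocarditis case: "
      f"{ratio_low:.1f}-{ratio_high:.1f}")
```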
The emergence of new variants in the United States and the skewed pattern of vaccination around the country also may increase the risk to unvaccinated young people, noted Grace Lee, MD, MPH, chair of the ACIP’s COVID-19 Vaccine Safety Technical Subgroup and a pediatric infectious disease physician at Stanford (Calif.) Children’s Health.
“If you’re in an area with low vaccination, the risks are higher,” she said. “The benefits [of the vaccine] are going to be far, far greater than any risk.”
Individuals, parents, and their clinicians should consider the full scope of risk when making decisions about vaccination, she said.
As the pandemic evolves, medical experts have to balance the known risks and benefits while they gather more information, said William Schaffner, MD, an infectious disease physician at Vanderbilt University, Nashville, Tenn., and medical director of the National Foundation for Infectious Diseases.
“The story is not over,” Dr. Schaffner said in an interview. “Clearly, we are still working in the face of a pandemic, so there’s urgency to continue vaccinating. But they would like to know more about the long-term consequences of the myocarditis.”
Booster possibilities
Meanwhile, ACIP began conversations on the parameters for a possible vaccine booster. For now, there are simply questions: Would a third vaccine help the immunocompromised gain protection? Should people get a different type of vaccine – mRNA versus adenovirus vector – for their booster? Most important, how long do antibodies last?
“Prior to going around giving everyone boosters, we really need to improve the overall vaccination coverage,” said Helen Keipp Talbot, MD, associate professor of medicine at Vanderbilt University. “That will protect everyone.”
A version of this article first appeared on Medscape.com.
Gray hair goes away and squids go to space
Goodbye stress, goodbye gray hair
Last year was a doozy, so it wouldn’t be too surprising if we all had a few new gray strands in our hair. But what if we told you that you don’t need to start dyeing them or plucking them out? What if they could magically go back to the way they were? Well, it may be possible, sans magic and sans stress.
Investigators recently discovered that the age-old belief that stress will permanently turn your hair gray may not be true after all. There’s a strong possibility that it could turn back to its original color once the stressful agent is eliminated.
“Understanding the mechanisms that allow ‘old’ gray hairs to return to their ‘young’ pigmented states could yield new clues about the malleability of human aging in general and how it is influenced by stress,” said senior author Martin Picard, PhD, of Columbia University, New York.
For the study, 14 volunteers were asked to keep a stress diary and review their levels of stress throughout the week. The researchers used a new method of imaging tiny segments of individual hairs to see how much graying took place along each strand. And what they found – some strands naturally turning back to their original color – had never been documented before.
How did it happen? Our good friends, the mitochondria. We haven’t really heard that word since eighth-grade biology, but they’re actually the key link between stress hormones and hair pigmentation. Think of them as little radars picking up all different kinds of signals in your body, like mental/emotional stress. If they get a big enough alert, they’re going to react – thus, gray hair.
So that’s all it takes? Cut the stress and a full head of gray can go back to brown? Not exactly. The researchers said there may be a “threshold because of biological age and other factors.” They believe middle-aged hair is near that threshold, so stress could easily push it over into gray – and removing the stress could potentially bring it back. But if you’ve been rocking the salt and pepper or silver fox for a number of years and are looking for change, you might want to just eliminate the stress and pick up a bottle of dye.
One small step for squid
Space does a number on the human body. Forget the obvious like going for a walk outside without a spacesuit, or even the well-known risks like the degradation of bone in microgravity; there are numerous smaller but still important changes to the body during spaceflight, like the disruption of the symbiotic relationship between gut bacteria and the human body. This causes the immune system to lose the ability to recognize threats, and illnesses spread more easily.
Naturally, if astronauts are going to undertake years-long journeys to Mars and beyond, a thorough understanding of this disturbance is necessary, and that’s why NASA has sent a bunch of squid to the International Space Station.
When it comes to animal studies, squid aren’t the usual culprits, but there’s a reason NASA chose calamari over the alternatives: The Hawaiian bobtail squid has a symbiotic relationship with bacteria that regulate their bioluminescence in much the same way that we have a symbiotic relationship with our gut bacteria, but the squid is a much simpler animal. If the bioluminescence-regulating bacteria are disturbed during their time in space, it will be much easier to figure out what’s going wrong.
The experiment is ongoing, but we should salute the brave squid who have taken a giant leap for squidkind. Though if NASA didn’t send them up in a giant bubble, we’re going to be very disappointed.
Less plastic, more vanilla
Have you been racked by guilt over the number of plastic water bottles you use? What about the amount of ice cream you eat? Well, this one’s for you.
Plastic isn’t the first thing you think about when you open up a pint of vanilla ice cream and catch the sweet, spicy vanilla scent, or when you smell those fresh vanilla scones coming out of the oven at the coffee shop, but a new study shows that the flavor of vanilla can come from water bottles.
Here’s the deal. A compound called vanillin is responsible for the scent of vanilla, and it can come naturally from the bean or it can be made synthetically. Believe it or not, 85% of vanillin is made synthetically from fossil fuels!
We’ve definitely grown accustomed to our favorite vanilla scents, foods, and cosmetics. In 2018, global demand for vanillin was about 40,800 tons, and it is expected to reach 65,000 tons by 2025, far exceeding the supply of natural vanilla.
So what can we do? Well, we can use genetically engineered bacteria to turn plastic water bottles into vanillin, according to a study published in the journal Green Chemistry.
The plastic can be broken down into terephthalic acid, which is very similar, chemically speaking, to vanillin. Similar enough that a bit of bioengineering produced Escherichia coli that could convert the acid into the tasty treat, according to researchers at the University of Edinburgh.
A perfect solution? Decreasing plastic waste while producing a valued food product? The thought of consuming plastic isn’t appetizing, so just eat your ice cream and try to forget about it.
No withdrawals from this bank
Into each life, some milestones must fall: High school graduation, birth of a child, first house, 50th wedding anniversary, COVID-19. One LOTME staffer got really excited – way too excited, actually – when his Nissan Sentra reached 300,000 miles.
Well, there are milestones, and then there are milestones. “1,000 Reasons for Hope” is a report celebrating the first 1,000 brains donated to the VA-BU-CLF Brain Bank. For those of you keeping score at home, that would be the Department of Veterans Affairs, Boston University, and the Concussion Legacy Foundation.
The Brain Bank, created in 2008 to study concussions and chronic traumatic encephalopathy, is the brainchild – yes, we went there – of Chris Nowinski, PhD, a former professional wrestler, and Ann McKee, MD, an expert on neurodegenerative disease. “Our discoveries have already inspired changes to sports that will prevent many future cases of CTE in the next generation of athletes,” Dr. Nowinski, the CEO of CLF, said in a written statement.
Data from the first thousand brains show that 706 men, including 305 former NFL players, had football as their primary exposure to head impacts. Women were underrepresented, making up only 2.8% of brain donations, so recruiting females is a priority. Anyone interested in pledging can go to PledgeMyBrain.org or call 617-992-0615 for the 24-hour emergency donation pager.
LOTME wanted to help, so we called the Brain Bank to find out about donating. They asked a few questions and we told them what we do for a living. “Oh, you’re with LOTME? Yeah, we’ve … um, seen that before. It’s, um … funny. Can we put you on hold?” We’re starting to get a little sick of the on-hold music by now.
Scaly beard rash
Waxy loose scale with associated erythema on the face and scalp is a classic sign of seborrheic dermatitis (SD).
SD is caused by inflammation related to the presence of Malassezia, which proliferates on sebum-rich areas of skin. Malassezia is normally present on the skin, but some individuals have a heightened sensitivity to it, leading to erythema and scale. It is prudent to examine the scalp, nasolabial folds, and around the ears, where SD often occurs concomitantly.
There are multiple topical and systemic options that treat the fungal involvement, reduce the subsequent inflammation, and diminish the scale.1 Topical azole antifungals are effective for reducing the amount of Malassezia present. Topical steroids work well to reduce the erythema. Fortunately, low-potency steroids, including hydrocortisone and desonide, are adequate. This is important because SD frequently involves the face, and higher-potency steroids can cause skin atrophy or rebound erythema.
Salicylic acid products exfoliate the scale and topical tar products suppress it, both leading to clinical improvement. Sunlight and narrowband UVB light therapy are also effective treatments. As was true for this patient, SD often improves during the summer months (when there is more sunlight) and when patients shave, as this allows for additional sun exposure to the skin.
The patient in this case was told to use ketoconazole shampoo for his scalp, beard, and mustache. He was instructed to use it at least 3 times per week, applying it to the scalp as the first part of his bathing routine and then waiting until the end to rinse it off. This technique maximizes the antifungal shampoo’s contact time on the skin. He was also given a prescription for ketoconazole cream to apply twice daily to the areas of facial erythema and scale. He was counseled that shaving his beard and mustache might help reduce the SD in those areas.
Photo and text courtesy of Daniel Stulberg, MD, FAAFP, Department of Family and Community Medicine, University of New Mexico School of Medicine, Albuquerque
Borda LJ, Perper M, Keri JE. Treatment of seborrheic dermatitis: a comprehensive review. J Dermatolog Treat. 2019;30:158-169. doi: 10.1080/09546634.2018.1473554
Performance matters in adenoma detection
Low adenoma detection rates (ADRs) were associated with a greater risk of death in colorectal cancer (CRC) patients, especially among those with high-risk adenomas, based on a review of more than 250,000 colonoscopies.
“Both performance quality of the endoscopist as well as specific characteristics of resected adenomas at colonoscopy are associated with colorectal cancer mortality,” but the impact of these combined factors on colorectal cancer mortality has not been examined on a large scale, according to Elisabeth A. Waldmann, MD, of the Medical University of Vienna and colleagues.
In a study published in Clinical Gastroenterology & Hepatology, the researchers reviewed 259,885 colonoscopies performed by 361 endoscopists. Over an average follow-up period of 59 months, 165 CRC-related deaths occurred.
Across all risk groups, CRC mortality was higher among patients whose colonoscopies were performed by endoscopists with an ADR of less than 25%, although the difference was not statistically significant in all groups.
The researchers then stratified patients into those with a negative colonoscopy, those with low-risk adenomas (one to two adenomas less than 10 mm), and those with high-risk adenomas (advanced adenomas or at least three adenomas), with the negative colonoscopy group used as the reference group for comparisons. The average age of the patients was 61 years, and approximately half were women.
Endoscopists were classified as having an ADR of less than 25% or 25% and higher.
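ADR here is simply the share of an endoscopist’s screening colonoscopies in which at least one adenoma is found. As a rough illustration of the classification used in the study (hypothetical record layout and field names, not the authors’ actual code; only the 25% cut point comes from the article), it could be computed like this:

```python
from collections import defaultdict

# Hypothetical records: (endoscopist_id, adenoma_detected) for each
# screening colonoscopy; identifiers and field names are illustrative only.
colonoscopies = [
    ("endoscopist_A", True), ("endoscopist_A", False),
    ("endoscopist_A", True), ("endoscopist_A", False),
    ("endoscopist_B", False), ("endoscopist_B", False),
    ("endoscopist_B", True), ("endoscopist_B", False), ("endoscopist_B", False),
]

tallies = defaultdict(lambda: [0, 0])  # id -> [exams with adenoma, total exams]
for endoscopist, adenoma_found in colonoscopies:
    tallies[endoscopist][0] += int(adenoma_found)
    tallies[endoscopist][1] += 1

for endoscopist, (with_adenoma, total) in tallies.items():
    adr = with_adenoma / total
    group = "ADR >= 25%" if adr >= 0.25 else "ADR < 25%"  # study's cut point
    print(f"{endoscopist}: ADR = {adr:.1%} ({group})")
```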
Among individuals with low-risk adenomas, CRC mortality was similar whether the endoscopist’s ADR was less than 25% or 25% or higher (adjusted hazard ratios, 1.25 and 1.22, respectively). CRC mortality also remained unaffected by ADR in patients with negative colonoscopies (aHR, 1.27).
By contrast, individuals with high-risk adenomas had a significantly increased risk of CRC death if their colonoscopy was performed by an endoscopist with an ADR of less than 25%, compared with those whose endoscopists had ADRs of 25% or higher (aHR, 2.25 and 1.35, respectively).
“Our study demonstrated that adding ADR to the risk stratification model improved risk assessment in all risk groups,” the researchers noted. “Importantly, stratification improved most for individuals with high-risk adenomas, the group demanding most resources in health care systems.”
The study findings were limited by several factors, including the focus on screening and surveillance colonoscopies only (diagnostic colonoscopies were not included) and the inability to adjust for comorbidities and lifestyle factors that might affect CRC mortality, the researchers noted. The 22.4% average ADR in the current study was low compared with other studies and could be a limitation as well, although previous guidelines recommend a target ADR of at least 20%.
“Despite the extensive body of literature supporting the importance of ADR in terms of CRC prevention, its implementation into clinical surveillance is challenging,” as physicians under pressure might try to game their ADRs, the researchers wrote.
The findings support the value of mandatory assessment of performance quality, the researchers added. However, “because of the potential possibility of gaming one’s ADR one conclusion drawn by the study results should be that endoscopists’ quality parameters should be monitored and those not meeting the standards trained to improve rather than requiring minimum ADRs as premise for offering screening colonoscopy.”
Improve performance, but don’t discount patient factors
The study is important at this time because colorectal cancer is the third-leading cause of cancer death in the United States, Atsushi Sakuraba, MD, of the University of Chicago, said in an interview.
“Screening colonoscopy has been shown to decrease CRC mortality, but factors influencing outcomes after screening colonoscopies remain to be determined,” he said.
“It was expected that high-quality colonoscopy performed by an endoscopist with ADR of 25% or greater was associated with a lower risk for CRC death,” Dr. Sakuraba said. “The strength of the study is that the authors demonstrated that high-quality colonoscopy was more important in individuals with high-risk adenomas, such as advanced adenomas or at least three adenomas.”
The study findings have implications for practice in that they show the importance of monitoring performance quality in screening colonoscopy, Dr. Sakuraba said, “especially when patients have high-risk adenomas.” However, “the authors included only age and sex as variables, but the influence of other factors, such as smoking, [body mass index], and race, need to be studied.”
The researchers had no financial conflicts to disclose. Dr. Sakuraba had no financial conflicts to disclose.
Help your patients understand colorectal cancer prevention and screening options by sharing AGA’s patient education from the GI Patient Center: www.gastro.org/CRC.
Low adenoma detection rates (ADRs) were associated with a greater risk of death in colorectal cancer (CRC) patients, especially among those with high-risk adenomas, based on a review of more than 250,000 colonoscopies.
“Both performance quality of the endoscopist as well as specific characteristics of resected adenomas at colonoscopy are associated with colorectal cancer mortality,” but the impact of these combined factors on colorectal cancer mortality has not been examined on a large scale, according to Elisabeth A. Waldmann, MD, of the Medical University of Vienna and colleagues.
In a study published in Clinical Gastroenterology & Hepatology, the researchers reviewed 259,885 colonoscopies performed by 361 endoscopists. Over an average follow-up period of 59 months, 165 CRC-related deaths occurred.
Across all risk groups, CRC mortality was higher among patients whose colonoscopies yielded an ADR of less than 25%, although this was not statistically significant in all groups.
The researchers then stratified patients into those with a negative colonoscopy, those with low-risk adenomas (one to two adenomas less than 10 mm), and those with high-risk adenomas (advanced adenomas or at least three adenomas), with the negative colonoscopy group used as the reference group for comparisons. The average age of the patients was 61 years, and approximately half were women.
Endoscopists were classified as having an ADR of less than 25% or 25% and higher.
Among individuals with low-risk adenomas, CRC mortality was similar whether the ADR on a negative colonoscopy was less than 25% or 25% or higher (adjusted hazard ratios, 1.25 and 1.22, respectively). CRC mortality also remained unaffected by ADR in patients with negatively colonoscopies (aHR, 1.27).
By contrast, individuals with high-risk adenomas had a significantly increased risk of CRC death if their colonoscopy was performed by an endoscopist with an ADR of less than 25%, compared with those whose endoscopists had ADRs of 25% or higher (aHR, 2.25 and 1.35, respectively).
“Our study demonstrated that adding ADR to the risk stratification model improved risk assessment in all risk groups,” the researchers noted. “Importantly, stratification improved most for individuals with high-risk adenomas, the group demanding most resources in health care systems.”
The study findings were limited by several factors including the focus on only screening and surveillance colonoscopies, not including diagnostic colonoscopies, and the inability to adjust for comorbidities and lifestyle factors that might impact CRC mortality, the researchers noted. The 22.4% average ADR in the current study was low, compared with other studies, and could be a limitation as well, although previous guidelines recommend a target ADR of at least 20%.
“Despite the extensive body of literature supporting the importance of ADR in terms of CRC prevention, its implementation into clinical surveillance is challenging,” as physicians under pressure might try to game their ADRs, the researchers wrote.
The findings support the value of mandatory assessment of performance quality, the researchers added. However, “because of the potential possibility of gaming one’s ADR one conclusion drawn by the study results should be that endoscopists’ quality parameters should be monitored and those not meeting the standards trained to improve rather than requiring minimum ADRs as premise for offering screening colonoscopy.”
Improve performance, but don’t discount patient factors
The study is important at this time because colorectal cancer is the third-leading cause of cancer death in the United States, Atsushi Sakuraba, MD, of the University of Chicago said in an interview.
“Screening colonoscopy has been shown to decrease CRC mortality, but factors influencing outcomes after screening colonoscopies remain to be determined,” he said.
“It was expected that high-quality colonoscopy performed by an endoscopist with ADR of 25% or greater was associated with a lower risk for CRC death,” Dr. Sakuraba said. “The strength of the study is that the authors demonstrated that high-quality colonoscopy was more important in individuals with high-risk adenomas, such as advanced adenomas or at least three adenomas.”
The study findings have implications for practice in that they show the importance of monitoring performance quality in screening colonoscopy, Dr. Sakuraba said, “especially when patients have high-risk adenomas.” However, “the authors included only age and sex as variables, but the influence of other factors, such as smoking, [body mass index], and race, need to be studied.”
The researchers had no financial conflicts to disclose. Dr. Sakuraba had no financial conflicts to disclose.
Help your patients understand colorectal cancer prevention and screening options by sharing AGA’s patient education from the GI Patient Center: www.gastro.org/CRC.
Low adenoma detection rates (ADRs) were associated with a greater risk of death in colorectal cancer (CRC) patients, especially among those with high-risk adenomas, based on a review of more than 250,000 colonoscopies.
“Both performance quality of the endoscopist as well as specific characteristics of resected adenomas at colonoscopy are associated with colorectal cancer mortality,” but the impact of these combined factors on colorectal cancer mortality has not been examined on a large scale, according to Elisabeth A. Waldmann, MD, of the Medical University of Vienna and colleagues.
In a study published in Clinical Gastroenterology & Hepatology, the researchers reviewed 259,885 colonoscopies performed by 361 endoscopists. Over an average follow-up period of 59 months, 165 CRC-related deaths occurred.
Across all risk groups, CRC mortality was higher among patients whose colonoscopies yielded an ADR of less than 25%, although this was not statistically significant in all groups.
The researchers then stratified patients into those with a negative colonoscopy, those with low-risk adenomas (one to two adenomas less than 10 mm), and those with high-risk adenomas (advanced adenomas or at least three adenomas), with the negative colonoscopy group used as the reference group for comparisons. The average age of the patients was 61 years, and approximately half were women.
Endoscopists were classified as having an ADR of less than 25% or 25% and higher.
Among individuals with low-risk adenomas, CRC mortality was similar whether the ADR on a negative colonoscopy was less than 25% or 25% or higher (adjusted hazard ratios, 1.25 and 1.22, respectively). CRC mortality also remained unaffected by ADR in patients with negatively colonoscopies (aHR, 1.27).
By contrast, individuals with high-risk adenomas had a significantly increased risk of CRC death if their colonoscopy was performed by an endoscopist with an ADR of less than 25%, compared with those whose endoscopists had ADRs of 25% or higher (aHR, 2.25 and 1.35, respectively).
“Our study demonstrated that adding ADR to the risk stratification model improved risk assessment in all risk groups,” the researchers noted. “Importantly, stratification improved most for individuals with high-risk adenomas, the group demanding most resources in health care systems.”
The study findings were limited by several factors including the focus on only screening and surveillance colonoscopies, not including diagnostic colonoscopies, and the inability to adjust for comorbidities and lifestyle factors that might impact CRC mortality, the researchers noted. The 22.4% average ADR in the current study was low, compared with other studies, and could be a limitation as well, although previous guidelines recommend a target ADR of at least 20%.
“Despite the extensive body of literature supporting the importance of ADR in terms of CRC prevention, its implementation into clinical surveillance is challenging,” as physicians under pressure might try to game their ADRs, the researchers wrote.
The findings support the value of mandatory assessment of performance quality, the researchers added. However, “because of the potential possibility of gaming one’s ADR one conclusion drawn by the study results should be that endoscopists’ quality parameters should be monitored and those not meeting the standards trained to improve rather than requiring minimum ADRs as premise for offering screening colonoscopy.”
Improve performance, but don’t discount patient factors
The study is important at this time because colorectal cancer is the third-leading cause of cancer death in the United States, Atsushi Sakuraba, MD, of the University of Chicago, said in an interview.
“Screening colonoscopy has been shown to decrease CRC mortality, but factors influencing outcomes after screening colonoscopies remain to be determined,” he said.
“It was expected that high-quality colonoscopy performed by an endoscopist with ADR of 25% or greater was associated with a lower risk for CRC death,” Dr. Sakuraba said. “The strength of the study is that the authors demonstrated that high-quality colonoscopy was more important in individuals with high-risk adenomas, such as advanced adenomas or at least three adenomas.”
The study findings have implications for practice in that they show the importance of monitoring performance quality in screening colonoscopy, Dr. Sakuraba said, “especially when patients have high-risk adenomas.” However, “the authors included only age and sex as variables, but the influence of other factors, such as smoking, [body mass index], and race, need to be studied.”
The researchers had no financial conflicts to disclose. Dr. Sakuraba had no financial conflicts to disclose.
Help your patients understand colorectal cancer prevention and screening options by sharing AGA’s patient education from the GI Patient Center: www.gastro.org/CRC.
FROM CLINICAL GASTROENTEROLOGY & HEPATOLOGY
HMAs benefit children with relapsed/refractory AML
Hypomethylating agents are generally considered to be agents of choice for older adults with acute myeloid leukemia who cannot tolerate the rigors of more intensive therapies, but HMAs also can serve as a bridge to transplant for children and young adults with relapsed or refractory acute myeloid leukemia.
That’s according to Himalee S. Sabnis, MD, MSc, and colleagues at Emory University and the Aflac Cancer and Blood Disorders Center at Children’s Healthcare of Atlanta.
In a scientific poster presented during the annual meeting of the American Society of Pediatric Hematology/Oncology, the investigators reported results of a retrospective study of HMA use in patients with relapsed or refractory pediatric AML treated in their center.
Curative intent and palliation
They identified 25 patients (15 boys) with a median age of 8.3 years (range, 1.4 to 21 years) with relapsed/refractory AML who received HMAs with curative intent as a bridge to hematopoietic stem cell transplant (HSCT), for palliation, or in combination with donor leukocyte infusion (DLI).
Of the 21 patients with relapsed disease, 16 were in first relapse and five were in second relapse or greater. Four of the patients had primary refractory disease. The cytogenetic and molecular features were KMT2A rearrangements in six patients, monosomy 7/deletion 7q in four patients, t(8;21) in three patients, and FLT3-ITD mutations in four patients.
The patients received an average of 5.3 HMA cycles each. Of the 133 total HMA cycles, 87 were with azacitidine, and 46 were with decitabine.
HMAs were used as monotherapy in 62% of cycles and in combination with other therapies in 38%. Of the combination cycles, 16 were with donor leukocyte infusion and 9 were with gemtuzumab ozogamicin (Mylotarg).
Of the 13 patients for whom HMAs were used as part of a treatment plan with curative intent, 5 proceeded to HSCT and 8 did not. Of the 5 who underwent transplant, 1 died from transplant-related causes and 4 were alive post transplant. Of the 8 patients who did not undergo transplant, 1 received chimeric antigen receptor T-cell (CAR T) therapy and 7 experienced disease progression.
The mean duration of palliative care was 144 days, with patients receiving from one to nine cycles with an HMA, and no treatment interruptions due to toxicity.
Of 5 patients who received donor leukocyte infusions, 3 reached minimal residual disease negativity; all 3 of these patients had late relapses but remained long-term survivors, the investigators reported.
They concluded that “hypomethylating agents can be used effectively as a bridge to transplantation in relapsed and refractory AML with gemtuzumab ozogamicin being the most common agent for combination therapy. Palliation with HMAs is associated with low toxicity and high tolerability in relapsed/refractory AML. Use of HMAs with DLI can induce sustained remissions in some patients.”
The authors propose prospective clinical trials using HMAs in the relapsed/refractory pediatric AML setting in combination with gemtuzumab ozogamicin, alternative targeted agents, and chemotherapy.
HMAs in treatment-related AML
Shilpa Shahani, MD, a pediatric oncologist and assistant clinical professor of pediatrics at City of Hope in Duarte, Calif., who was not involved in the study, has experience administering HMAs primarily in the adolescent and young adult population with AML.
“Azacitidine and decitabine are good for treatment-related leukemias,” she said in an interview. “They can be used otherwise for people who have relapsed disease and are trying to navigate other options.”
Although they are not standard first-line agents in younger patients, HMAs can play a useful role in therapy for relapsed or refractory disease, she said.
The authors and Dr. Shahani reported having no conflicts of interest to disclose.
FROM THE 2021 ASPHO CONFERENCE
Restricted dietary acid load may reduce odds of migraine
Key clinical point: High dietary acid load was associated with higher odds of migraine. Restricting dietary acid load could therefore reduce the odds of migraine in susceptible patients.
Major finding: The risk for migraine was higher among individuals in the highest vs. lowest tertile of dietary acid load measures, including the potential renal acid load score (odds ratio [OR], 7.208; 95% confidence interval [CI], 3.33-15.55), the net endogenous acid production score (OR, 4.10; 95% CI, 1.92-8.77), and the protein/potassium ratio (OR, 4.12; 95% CI, 1.93-8.81); P for trend less than .001 for all.
Study details: Findings are from a case-control study of 1,096 participants including those with migraine (n=514) and healthy volunteers (n=582).
Disclosures: The study was supported by the Iranian Centre of Neurological Research, Neuroscience Institute. All authors declared no conflicts of interest.
Source: Mousavi M et al. Neurol Ther. 2021 Apr 24. doi: 10.1007/s40120-021-00247-2.
Migraine linked to increased hypertension risk in menopausal women
Key clinical point: Menopausal women with migraine are at a higher risk for incident hypertension.
Major finding: Migraine was associated with an increased risk for incident hypertension (hazard ratio, 1.29; 95% confidence interval, 1.24-1.35) in menopausal women.
Study details: Findings are from a longitudinal cohort study of 56,202 menopausal women free of hypertension or cardiovascular disease at the age of menopause who participated in the French E3N cohort.
Disclosures: The authors reported no targeted funding. CJ MacDonald and T Kurth received funding and/or honoraria from multiple sources. Other authors had no disclosures relevant to the manuscript.
Source: MacDonald CJ et al. Neurology. 2021 Apr 21. doi: 10.1212/WNL.0000000000011986.
Algorithms for Prediction of Clinical Deterioration on the General Wards: A Scoping Review
The early identification of clinical deterioration among adult hospitalized patients remains a challenge.1 Delayed identification is associated with increased morbidity and mortality, unplanned intensive care unit (ICU) admissions, prolonged hospitalization, and higher costs.2,3 Earlier detection of deterioration using predictive algorithms of vital sign monitoring might avoid these negative outcomes.4 In this scoping review, we summarize current algorithms and their evidence.
Vital signs provide the backbone for detecting clinical deterioration. Early warning scores (EWS) and outreach protocols were developed to bring structure to the assessment of vital signs. Most EWS claim to predict clinical end points such as unplanned ICU admission up to 24 hours in advance.5,6 Reviews of EWS showed a positive trend toward reduced length of stay and mortality. However, conclusions about general efficacy could not be generated because of case heterogeneity and methodologic shortcomings.4,7 Continuous automated vital sign monitoring of patients on the general ward can now be accomplished with wearable devices.8 The first reports on continuous monitoring showed earlier detection of deterioration but not improved clinical end points.4,9 Since then, different reports on continuous monitoring have shown positive effects but concluded that unprocessed monitoring data per se falls short of generating actionable alarms.4,10,11
Predictive algorithms, which often use artificial intelligence (AI), are increasingly employed to recognize complex patterns or abnormalities and support predictions of events in big data sets.12,13 Especially when combined with continuous vital sign monitoring, predictive algorithms have the potential to expedite detection of clinical deterioration and improve patient outcomes. Predictive algorithms using vital signs in the ICU have shown promising results.14 The impact of predictive algorithms on the general wards, however, is unclear.
The aims of our scoping review were to explore the extent and range of and evidence for predictive vital signs–based algorithms on the adult general ward; to describe the variety of these algorithms; and to categorize effects, facilitators, and barriers of their implementation.15
MATERIALS AND METHODS
We performed a scoping review to create a summary of the current state of research. We used the five-step method of Levac and followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses Extension for Scoping Reviews guidelines (Appendix 1).16,17
PubMed, Embase, and CINAHL databases were searched for English-language articles written between January 1, 2010, and November 20, 2020. We developed the search queries with an experienced information scientist, and we used database-specific terms and strategies for input, clinical outcome, method, predictive capability, and population (Appendix 2). Additionally, we searched the references of the selected articles, as well as publications citing these articles.
All studies identified were screened by title and abstract by two researchers (RP and YE). The selected studies were read in their entirety and checked for eligibility using the following inclusion criteria: automated algorithm; vital signs-based; real-time prediction; of clinical deterioration; in an adult, general ward population. In cases where there were successive publications with the same algorithm and population, we selected the most recent study.
For screening and selection, we used the Rayyan QCRI online tool (Qatar Computing Research Institute) and EndNote X9 (Clarivate Analytics). We extracted information using a data extraction form and organized it into descriptive characteristics of the selected studies (Table 1): an input data table showing the number of admissions, intermittent or continuous measurements, vital signs measured, and laboratory results (Appendix Table 1); a table summarizing study designs and settings (Appendix Table 2); and a prediction performance table (Table 2). We report characteristics of the populations and algorithms, prediction specifications such as area under the receiver operating characteristic curve (AUROC), and predictive values. Predictive values are affected by prevalence, which may differ among populations. To compare the algorithms, we therefore calculated an indexed positive predictive value (PPV) and a number needed to evaluate (NNE) using a weighted average prevalence of clinical deterioration of 3.0%.
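The review does not spell out the exact indexing calculation; one common way to standardize predictive values is to recompute the PPV from each algorithm’s reported sensitivity and specificity at the fixed 3.0% prevalence via Bayes’ theorem, and then take the NNE as its reciprocal. The sketch below illustrates that approach; the helper names and the example operating point are illustrative assumptions, not figures drawn from the included studies.

```python
# Illustrative sketch only: standardizing PPV to a common prevalence so algorithms
# evaluated in populations with different event rates can be compared.
# The function names and example numbers are hypothetical, not the authors' own code.

def indexed_ppv(sensitivity: float, specificity: float, prevalence: float = 0.03) -> float:
    """PPV recomputed at a fixed prevalence via Bayes' theorem."""
    true_positive_rate = sensitivity * prevalence
    false_positive_rate = (1 - specificity) * (1 - prevalence)
    return true_positive_rate / (true_positive_rate + false_positive_rate)

def number_needed_to_evaluate(ppv: float) -> float:
    """NNE = 1/PPV: alarms that must be assessed to find one true deterioration."""
    return 1.0 / ppv

if __name__ == "__main__":
    # Assumed operating point: 60% sensitivity, 90% specificity, 3.0% prevalence.
    ppv = indexed_ppv(sensitivity=0.60, specificity=0.90)
    print(f"Indexed PPV: {ppv:.3f}")                     # ~0.157
    print(f"NNE: {number_needed_to_evaluate(ppv):.1f}")  # ~6.4 alarms per true event
```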
We defined clinical deterioration using end points that included rapid response team activation, cardiopulmonary resuscitation, transfer to an ICU, or death.
Effects, facilitators, and barriers were identified and categorized using ATLAS.ti 8 software (ATLAS.ti) and evaluated by three researchers (RP, MK, and THvdB). These were categorized using the adapted frameworks of Gagnon et al18 for the barriers and facilitators and of Donabedian19 for the effects (Appendix 3).
The Gagnon et al framework was adapted by changing two of four domains—that is, “Individual” was changed to “Professional” and “Human” to “Physiology.” The domains of “Technology” and “Organization” remained unchanged. The Donabedian domains of “Outcome,” “Process,” and “Structure” also remained unchanged (Table 3).
When reporting on characteristics and performance, we divided the studies into two groups: predictive algorithms with AI and those without. For the secondary aim of exploring implementation impact, we reported facilitators and barriers narratively, highlighting the most frequent and notable findings.
RESULTS
As shown in the Figure, we found 1741 publications, of which we read the full text of 109. There were 1632 publications that did not meet the inclusion criteria. The publications by Churpek et al,20,21 Bartkowiak et al,22 Edelson et al,23 Escobar et al,24,25 and Kipnis et al26 reported on the same algorithms or databases but had significantly different approaches. For multiple publications using the same algorithm and population, the most recent was cited, with the earlier findings incorporated.20,21,27-29 The resulting 21 papers are included in this review.
Descriptive characteristics of the studies are summarized in Table 1. Nineteen of the publications were full papers and two were conference abstracts. Most of the studies (n = 18) were from the United States; there was one study from South Korea,30 one study from Portugal,31 and one study from the United Kingdom.32 In 15 of the studies, there was a strict focus on general or specific wards; 6 studies also included the ICU and/or emergency departments.
Two of the studies were clinical trials, two were prospective observational studies, and 17 were retrospective studies. Five studies reported on an active predictive model during admission. Of these, three reported that the model was clinically implemented, using the predictions in their clinical workflow. None of the implemented studies used AI.
All input variables are presented in Appendix Table 1.
The non-AI algorithm prediction horizons ranged from 4 to 24 hours, with a median of 24 hours (interquartile range [IQR], 12-24 hours). The AI algorithms ranged from 2 to 48 hours and had a median horizon of 14 hours (IQR, 12-24 hours).
We found three studies reporting patient outcomes. The most recent of these was a large multicenter implementation study by Escobar et al25 that included an extensive follow-up response. This study reported a significantly decreased 30-day mortality in the intervention cohort. A smaller randomized controlled trial reported no significant differences in patient outcomes with earlier warning alarms.27 A third study reported more appropriate rapid response team deployment and decreased mortality in a subgroup analysis.35
Effects, Facilitators, and Barriers
As shown in the Appendix Figure and further detailed in Table 3, the described effects were predominantly positive—57 positive effects vs 11 negative effects. These positive effects sorted primarily into the outcome and process domains.
All of the studies that compared their proposed model with one of various warning systems (eg, EWS, National Early Warning Score [NEWS], Modified Early Warning Score [MEWS]) showed superior performance (based on AUROC and reported predictive values). In 17 studies, the authors reported their model as more useful or superior to the EWS.20-23,26-28,34,36-41 Four studies reported real-time detection of deterioration before regular EWS,20,26,42 and three studies reported positive effects on patient-related outcomes.26,35 Four negative effects were noted on the controllability, validity, and potential limitations.27,42
Of the 38 remarks in the Technology domain, difficulty with implementation in daily practice was a commonly cited barrier.22,24,40,42 Difficulties included creating real-time data feeds out of the EMR, though some successful examples were mentioned.25,27,36 The limited interpretability of AI was also considered a potential barrier.30,32,33,35,39,41 There were also remarks questioning the applicability of prolonged prediction horizons because of the associated decoupling from the clinical picture.39,42
Conservative attitudes toward new technologies and inadequate knowledge were mentioned as barriers.39 Repeated remarks were made on the difficulty of interpreting and responding to a predicted escalation, as the clinical pattern might not be recognizable at such an early stage. On the other hand, it is expected that less invasive countermeasures would be adequate to avert further escalation. Earlier recognition of possible escalations also raised potential ethical questions, such as when to discuss palliative care.24
The heterogeneity of the general ward population and the relatively low prevalence of deterioration were mentioned as barriers.24,30,38,41 There were also concerns that not all escalations are preventable and that some patient outcomes may not be modifiable.24,38
Many investigators expected reductions in false alarms and associated alarm fatigue (reflected as higher PPVs). Furthermore, they expected workflow to improve and workload to decrease.21,23,27,31,33,35,38,41 Despite the capacity of modern EMRs to store large amounts of patient data, some investigators felt improvements to real-time access, data quality and validity, and data density are needed to ensure valid associated predictions.21,22,24,32,37
DISCUSSION
As the complexity and comorbidity of hospitalized adults grow, predicting clinical deterioration is becoming more important. With an ever-increasing amount of available
There are several important limitations across these studies. In a clinical setting, these models would function as a screening test. Almost all studies report an AUROC; however, sensitivity and PPV or NNE (defined as 1/PPV) may be more useful than AUROC when predicting low-frequency events with high-potential clinical impact.44 Assessing the NNE is especially relevant because of its relation to alarm fatigue and responsiveness of clinicians.43 Alarm fatigue and lack of adequate response to alarms were repeatedly cited as potential barriers for application of automated scores.
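To make the alarm-burden point concrete, a small worked example is shown below; the PPV of 0.10 is an assumed operating point for illustration, not a pooled study value.

```latex
% Illustrative arithmetic only; the PPV of 0.10 is an assumed operating point.
\[
  \mathrm{NNE} = \frac{1}{\mathrm{PPV}} = \frac{1}{0.10} = 10
\]
% That is, roughly 10 alarms would need clinical evaluation to find one true
% deterioration; a workload that a high AUROC does not by itself convey.
```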
Although the results of our scoping review are promising, there are limited data on clinical outcomes using these algorithms. Only three of five algorithms were used to guide clinical decision-making.25,27,35 Kollef et al27 showed shorter hospitalizations, and Evans et al35 found decreased mortality rates in a multimorbid subgroup. Escobar et al25 found an overall and consistent decrease in mortality in a large, heterogeneous population of inpatients across 21 hospitals. While Escobar et al’s findings provide strong evidence that predictive algorithms and structured follow-up on alarms can improve patient outcomes, the authors acknowledge that not all facilities will have the resources to implement them.25 Dedicated round-the-clock follow-up of alarms has yet to be proven feasible for smaller institutions, and leaner solutions must be explored. The example set by Escobar et al25 should be translated into various settings to prove its reproducibility and to substantiate the clinical impact of predictive models and structured follow-up.
According to expert opinion, the use of high-frequency or continuous monitoring at low-acuity wards and AI algorithms to detect trends and patterns will reduce failure-to-rescue rates.4,9,43 However, most studies in our review focused on periodic spot-checked vital signs, and none of the AI algorithms were implemented in clinical care (Appendix Table 1).
STRENGTHS AND LIMITATIONS
We performed a comprehensive review of the current literature using a clear and reproducible methodology to minimize the risk of missing relevant publications. The identified research is mainly limited to large US centers and consists mostly of retrospective studies. Heterogeneity among inputs, end points, time horizons, and evaluation metrics makes comparisons challenging. Comments on facilitators, barriers, and effects were limited.
RECOMMENDATIONS FOR FUTURE RESEARCH
Artificial intelligence and the use of continuous monitoring hold great promise in creating optimal predictive algorithms. Future studies should directly compare AI- and non-AI-based algorithms using continuous monitoring to determine predictive accuracy, feasibility, costs, and outcomes. A consensus on endpoint definitions, input variables, methodology, and reporting is needed to enhance reproducibility, comparability, and generalizability of future research.
CONCLUSION
- van Galen LS, Struik PW, Driesen BEJM, et al. Delayed recognition of deterioration of patients in general wards is mostly caused by human related monitoring failures: a root cause analysis of unplanned ICU admissions. PLoS One. 2016;11(8):e0161393. https://doi.org/10.1371/journal.pone.0161393
- Mardini L, Lipes J, Jayaraman D. Adverse outcomes associated with delayed intensive care consultation in medical and surgical inpatients. J Crit Care. 2012;27(6):688-693. https://doi.org/10.1016/j.jcrc.2012.04.011
- Young MP, Gooder VJ, McBride K, James B, Fisher ES. Inpatient transfers to the intensive care unit: delays are associated with increased mortality and morbidity. J Gen Intern Med. 2003;18(2):77-83. https://doi.org/10.1046/j.1525-1497.2003.20441.x
- Khanna AK, Hoppe P, Saugel B. Automated continuous noninvasive ward monitoring: future directions and challenges. Crit Care. 2019;23(1):194. https://doi.org/10.1186/s13054-019-2485-7
- Ludikhuize J, Hamming A, de Jonge E, Fikkers BG. Rapid response systems in The Netherlands. Jt Comm J Qual Patient Saf. 2011;37(3):138-197. https://doi.org/10.1016/s1553-7250(11)37017-1
- Cuthbertson BH, Boroujerdi M, McKie L, Aucott L, Prescott G. Can physiological variables and early warning scoring systems allow early recognition of the deteriorating surgical patient? Crit Care Med. 2007;35(2):402-409. https://doi.org/10.1097/01.ccm.0000254826.10520.87
- Alam N, Hobbelink EL, van Tienhoven AJ, van de Ven PM, Jansma EP, Nanayakkara PWB. The impact of the use of the Early Warning Score (EWS) on patient outcomes: a systematic review. Resuscitation. 2014;85(5):587-594. https://doi.org/10.1016/j.resuscitation.2014.01.013
- Weenk M, Koeneman M, van de Belt TH, Engelen LJLPG, van Goor H, Bredie SJH. Wireless and continuous monitoring of vital signs in patients at the general ward. Resuscitation. 2019;136:47-53. https://doi.org/10.1016/j.resuscitation.2019.01.017
- Cardona-Morrell M, Prgomet M, Turner RM, Nicholson M, Hillman K. Effectiveness of continuous or intermittent vital signs monitoring in preventing adverse events on general wards: a systematic review and meta-analysis. Int J Clin Pract. 2016;70(10):806-824. https://doi.org/10.1111/ijcp.12846
- Brown H, Terrence J, Vasquez P, Bates DW, Zimlichman E. Continuous monitoring in an inpatient medical-surgical unit: a controlled clinical trial. Am J Med. 2014;127(3):226-232. https://doi.org/10.1016/j.amjmed.2013.12.004
- Mestrom E, De Bie A, van de Steeg M, Driessen M, Atallah L, Bezemer R. Implementation of an automated early warning scoring system in a surgical ward: practical use and effects on patient outcomes. PLoS One. 2019;14(5):e0213402. https://doi.org/10.1371/journal.pone.0213402
- Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243. https://doi.org/10.1136/svn-2017-000101
- Iwashyna TJ, Liu V. What’s so different about big data? A primer for clinicians trained to think epidemiologically. Ann Am Thorac Soc. 2014;11(7):1130-1135. https://doi.org/10.1513/annalsats.201405-185as
- Jalali A, Bender D, Rehman M, Nadkanri V, Nataraj C. Advanced analytics for outcome prediction in intensive care units. Conf Proc IEEE Eng Med Biol Soc. 2016;2016:2520-2524. https://doi.org/10.1109/embc.2016.7591243
- Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18(1):143. https://doi.org/10.1186/s12874-018-0611-x
- Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19-32. https://doi.org/10.1080/1364557032000119616
- Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467-473. https://doi.org/10.7326/m18-0850
- Gagnon MP, Desmartis M, Gagnon J, et al. Framework for user involvement in health technology assessment at the local level: views of health managers, user representatives, and clinicians. Int J Technol Assess Health Care. 2015;31(1-2):68-77. https://doi.org/10.1017/s0266462315000070
- Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743-1748. https://doi.org/10.1001/jama.260.12.1743
- Churpek MM, Yuen TC, Winslow C, et al. Multicenter development and validation of a risk stratification tool for ward patients. Am J Respir Crit Care Med. 2014;190(6):649-655. https://doi.org/10.1164/rccm.201406-1022oc
- Churpek MM, Yuen TC, Winslow C, Meltzer DO, Kattan MW, Edelson DP. Multicenter comparison of machine learning methods and conventional regression for predicting clinical deterioration on the wards. Crit Care Med. 2016;44(2):368-374. https://doi.org/10.1097/ccm.0000000000001571
- Bartkowiak B, Snyder AM, Benjamin A, et al. Validating the electronic cardiac arrest risk triage (eCART) score for risk stratification of surgical inpatients in the postoperative setting: retrospective cohort study. Ann Surg. 2019;269(6):1059-1063. https://doi.org/10.1097/sla.0000000000002665
- Edelson DP, Carey K, Winslow CJ, Churpek MM. Less is more: detecting clinical deterioration in the hospital with machine learning using only age, heart rate and respiratory rate. Abstract presented at: American Thoracic Society International Conference; May 22, 2018; San Diego, California. Am J Resp Crit Care Med. 2018;197:A4444.
- Escobar GJ, LaGuardia JC, Turk BJ, Ragins A, Kipnis P, Draper D. Early detection of impending physiologic deterioration among patients who are not in intensive care: development of predictive models using data from an automated electronic medical record. J Hosp Med. 2012;7(5):388-395. https://doi.org/10.1002/jhm.1929
- Escobar GJ, Liu VX, Schuler A, Lawson B, Greene JD, Kipnis P. Automated identification of adults at risk for in-hospital clinical deterioration. N Engl J Med. 2020;383(20):1951-1960. https://doi.org/10.1056/nejmsa2001090
- Kipnis P, Turk BJ, Wulf DA, et al. Development and validation of an electronic medical record-based alert score for detection of inpatient deterioration outside the ICU. J Biomed Inform. 2016;64:10-19. https://doi.org/10.1016/j.jbi.2016.09.013
- Kollef MH, Chen Y, Heard K, et al. A randomized trial of real-time automated clinical deterioration alerts sent to a rapid response team. J Hosp Med. 2014;9(7):424-429. https://doi.org/10.1002/jhm.2193
- Hackmann G, Chen M, Chipara O, et al. Toward a two-tier clinical warning system for hospitalized patients. AMIA Annu Symp Proc. 2011;2011:511-519.
- Bailey TC, Chen Y, Mao Y, Lu C, Hackmann G, Micek ST. A trial of a real-time alert for clinical deterioration in patients hospitalized on general medical wards. J Hosp Med. 2013;8(5):236-242. https://doi.org/10.1002/jhm.2009
- Kwon JM, Lee Y, Lee Y, Lee S, Park J. An algorithm based on deep learning for predicting in-hospital cardiac arrest. J Am Heart Assoc. 2018;7(13):e008678. https://doi.org/10.1161/jaha.118.008678
- Correia S, Gomes A, Shahriari S, Almeida JP, Severo M, Azevedo A. Performance of the early warning system vital to predict unanticipated higher-level of care admission and in-hospital death of ward patients. Value Health. 2018;21(S3):S360. https://doi.org/10.1016/j.jval.2018.09.2152
- Shamout FE, Zhu T, Sharma P, Watkinson PJ, Clifton DA. Deep interpretable early warning system for the detection of clinical deterioration. IEEE J Biomed Health Inform. 2020;24(2):437-446. https://doi.org/10.1109/jbhi.2019.2937803
- Bai Y, Do DH, Harris PRE, et al. Integrating monitor alarms with laboratory test results to enhance patient deterioration prediction. J Biomed Inform. 2015;53:81-92. https://doi.org/10.1016/j.jbi.2014.09.006
- Hu X, Sapo M, Nenov V, et al. Predictive combinations of monitor alarms preceding in-hospital code blue events. J Biomed Inform. 2012;45(5):913-921. https://doi.org/10.1016/j.jbi.2012.03.001
- Evans RS, Kuttler KG, Simpson KJ, et al. Automated detection of physiologic deterioration in hospitalized patients. J Am Med Inform Assoc. 2015;22(2):350-360. https://doi.org/10.1136/amiajnl-2014-002816
- Ghosh E, Eshelman L, Yang L, Carlson E, Lord B. Early deterioration indicator: data-driven approach to detecting deterioration in general ward. Resuscitation. 2018;122:99-105. https://doi.org/10.1016/j.resuscitation.2017.10.026
- Kang MA, Churpek MM, Zadravecz FJ, Adhikari R, Twu NM, Edelson DP. Real-time risk prediction on the wards: a feasibility study. Crit Care Med. 2016;44(8):1468-1473. https://doi.org/10.1097/ccm.0000000000001716
- Hu SB, Wong DJL, Correa A, Li N, Deng JC. Prediction of clinical deterioration in hospitalized adult patients with hematologic malignancies using a neural network model. PLoS One. 2016;11(8):e0161401. https://doi.org/10.1371/journal.pone.0161401
- Rothman MJ, Rothman SI, Beals J 4th. Development and validation of a continuous measure of patient condition using the electronic medical record. J Biomed Inform. 2013;46(5):837-848. https://doi.org/10.1016/j.jbi.2013.06.011
- Alaa AM, Yoon J, Hu S, van der Schaar M. Personalized risk scoring for critical care prognosis using mixtures of Gaussian processes. IEEE Trans Biomed Eng. 2018;65(1):207-218. https://doi.org/10.1109/tbme.2017.2698602
- Mohamadlou H, Panchavati S, Calvert J, et al. Multicenter validation of a machine-learning algorithm for 48-h all-cause mortality prediction. Health Informatics J. 2020;26(3):1912-1925. https://doi.org/10.1177/1460458219894494
- Alvarez CA, Clark CA, Zhang S, et al. Predicting out of intensive care unit cardiopulmonary arrest or death using electronic medical record data. BMC Med Inform Decis Mak. 2013;13:28. https://doi.org/10.1186/1472-6947-13-28
- Vincent JL, Einav S, Pearse R, et al. Improving detection of patient deterioration in the general hospital ward environment. Eur J Anaesthesiol. 2018;35(5):325-333. https://doi.org/10.1097/eja.0000000000000798
- Romero-Brufau S, Huddleston JM, Escobar GJ, Liebow M. Why the C-statistic is not informative to evaluate early warning scores and what metrics to use. Crit Care. 2015;19(1):285. https://doi.org/10.1186/s13054-015-0999-1
- Weenk M, Bredie SJ, Koeneman M, Hesselink G, van Goor H, van de Belt TH. Continuous monitoring of the vital signs in the general ward using wearable devices: randomized controlled trial. J Med Internet Res. 2020;22(6):e15471. https://doi.org/10.2196/15471
- Wellner B, Grand J, Canzone E, et al. Predicting unplanned transfers to the intensive care unit: a machine learning approach leveraging diverse clinical elements. JMIR Med Inform. 2017;5(4):e45. https://doi.org/10.2196/medinform.8680
- Elliott M, Baird J. Pulse oximetry and the enduring neglect of respiratory rate assessment: a commentary on patient surveillance. Br J Nurs. 2019;28(19):1256-1259. https://doi.org/10.12968/bjon.2019.28.19.1256
- Blackwell JN, Keim-Malpass J, Clark MT, et al. Early detection of in-patient deterioration: one prediction model does not fit all. Crit Care Explor. 2020;2(5):e0116. https://doi.org/10.1097/cce.0000000000000116
- Johnson AEW, Pollard TJ, Shen L, et al. MIMIC-III, a freely accessible critical care database. Sci Data. 2016;3:160035. https://doi.org/10.1038/sdata.2016.35
- Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12(6):573-576. https://doi.org/10.1370/afm.1713
- Kirkland LL, Malinchoc M, O’Byrne M, et al. A clinical deterioration prediction tool for internal medicine patients. Am J Med Qual. 2013;28(2):135-142. https://doi.org/10.1177/1062860612450459
The early identification of clinical deterioration among adult hospitalized patients remains a challenge.1 Delayed identification is associated with increased morbidity and mortality, unplanned intensive care unit (ICU) admissions, prolonged hospitalization, and higher costs.2,3 Earlier detection of deterioration using predictive algorithms of vital sign monitoring might avoid these negative outcomes.4 In this scoping review, we summarize current algorithms and their evidence.
Vital signs provide the backbone for detecting clinical deterioration. Early warning scores (EWS) and outreach protocols were developed to bring structure to the assessment of vital signs. Most EWS claim to predict clinical end points such as unplanned ICU admission up to 24 hours in advance.5,6 Reviews of EWS showed a positive trend toward reduced length of stay and mortality. However, conclusions about general efficacy could not be generated because of case heterogeneity and methodologic shortcomings.4,7 Continuous automated vital sign monitoring of patients on the general ward can now be accomplished with wearable devices.8 The first reports on continuous monitoring showed earlier detection of deterioration but not improved clinical end points.4,9 Since then, different reports on continuous monitoring have shown positive effects but concluded that unprocessed monitoring data per se falls short of generating actionable alarms.4,10,11
Predictive algorithms, which often use artificial intelligence (AI), are increasingly employed to recognize complex patterns or abnormalities and support predictions of events in big data sets.12,13 Especially when combined with continuous vital sign monitoring, predictive algorithms have the potential to expedite detection of clinical deterioration and improve patient outcomes. Predictive algorithms using vital signs in the ICU have shown promising results.14 The impact of predictive algorithms on the general wards, however, is unclear.
The aims of our scoping review were to explore the extent and range of and evidence for predictive vital signs–based algorithms on the adult general ward; to describe the variety of these algorithms; and to categorize effects, facilitators, and barriers of their implementation.15
MATERIALS AND METHODS
We performed a scoping review to create a summary of the current state of research. We used the five-step method of Levac and followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses Extension for Scoping Reviews guidelines (Appendix 1).16,17
PubMed, Embase, and CINAHL databases were searched for English-language articles written between January 1, 2010, and November 20, 2020. We developed the search queries with an experienced information scientist, and we used database-specific terms and strategies for input, clinical outcome, method, predictive capability, and population (Appendix 2). Additionally, we searched the references of the selected articles, as well as publications citing these articles.
All studies identified were screened by title and abstract by two researchers (RP and YE). The selected studies were read in their entirety and checked for eligibility using the following inclusion criteria: automated algorithm; vital signs-based; real-time prediction; of clinical deterioration; in an adult, general ward population. In cases where there were successive publications with the same algorithm and population, we selected the most recent study.
For screening and selection, we used the Rayyan QCRI online tool (Qatar Computing Research Institute) and Endnote X9 (Clarivate Analytics). We extracted information using a data extraction form and organized it into descriptive characteristics of the selected studies (Table 1): an input data table showing number of admissions, intermittent or continuous measurements, vital signs measured, laboratory results (Appendix Table 1), a table summarizing study designs and settings (Appendix Table 2), and a prediction performance table (Table 2). We report characteristics of the populations and algorithms, prediction specifications such as area under the receiver operating curve (AUROC), and predictive values. Predictive values are affected by prevalence, which may differ among populations. To compare the algorithms, we calculated an indexed positive predictive value (PPV) and a number needed to evaluate (NNE) using a weighted average prevalence of clinical deterioration of 3.0%.
We defined clinical deterioration as end points, including rapid response team activation, cardiopulmonary resuscitation, transfer to an ICU, or death.
Effects, facilitators, and barriers were identified and categorized using ATLAS.ti 8 software (ATLAS.ti) and evaluated by three researchers (RP, MK, and THvdB). These were categorized using the adapted frameworks of Gagnon et al18 for the barriers and facilitators and of Donabedian19 for the effects (Appendix 3).
The Gagnon et al framework was adapted by changing two of four domains—that is, “Individual” was changed to “Professional” and “Human” to “Physiology.” The domains of “Technology” and “Organization” remained unchanged. The Donabedian domains of “Outcome,” “Process,” and “Structure” also remained unchanged (Table 3).
We divided the studies into two groups: studies on predictive algorithms with and without AI when reporting on characteristics and performance. For the secondary aim of exploring implementation impact, we reported facilitators and barriers in a narrative way, highlighting the most frequent and notable findings.
RESULTS
As shown in the Figure, we found 1741 publications, of which we read the full-text of 109. There were 1632 publications that did not meet the inclusion criteria. The publications by Churpek et al,20,21 Bartkiowak et al,22 Edelson et al,23 Escobar et al,24,25 and Kipnis et al26 reported on the same algorithms or databases but had significantly different approaches. For multiple publications using the same algorithm and population, the most recent was named with inclusion of the earlier findings.20,21,27-29 The resulting 21 papers are included in this review.
Descriptive characteristics of the studies are summarized in Table 1. Nineteen of the publications were full papers and two were conference abstracts. Most of the studies (n = 18) were from the United States; there was one study from South Korea,30 one study from Portugal,31 and one study from the United Kingdom.32 In 15 of the studies, there was a strict focus on general or specific wards; 6 studies also included the ICU and/or emergency departments.
Two of the studies were clinical trials, 2 were prospective observational studies, and 17 were retrospective studies. Five studies reported on an active predictive model during admission. Of these, 3 reported that the model was clinically implemented, using the predictions in their clinical workflow. None of the implemented studies used AI.
All input variables are presented in Appendix Table 1.
The non-AI algorithm prediction horizons ranged from 4 to 24 hours, with a median of 24 hours (interquartile range [IQR], 12-24 hours). The AI algorithms ranged from 2 to 48 hours and had a median horizon of 14 hours (IQR, 12-24 hours).
We found three studies reporting patient outcomes. The most recent of these was a large multicenter implementation study by Escobar et al25 that included an extensive follow-up response. This study reported a significantly decreased 30-day mortality in the intervention cohort. A smaller randomized controlled trial reported no significant differences in patient outcomes with earlier warning alarms.27 A third study reported more appropriate rapid response team deployment and decreased mortality in a subgroup analysis.35
Effects, Facilitators, and Barriers
As shown in the Appendix Figure and further detailed in Table 3, the described effects were predominantly positive—57 positive effects vs 11 negative effects. These positive effects sorted primarily into the outcome and process domains.
All of the studies that compared their proposed model with one of various warning systems (eg, EWS, National Early Warning Score [NEWS], Modified Early Warning Score [MEWS]) showed superior performance (based on AUROC and reported predictive values). In 17 studies, the authors reported their model as more useful or superior to the EWS.20-23,26-28,34,36-41 Four studies reported real-time detection of deterioration before regular EWS,20,26,42 and three studies reported positive effects on patient-related outcomes.26,35 Four negative effects were noted on the controllability, validity, and potential limitations.27,42
The early identification of clinical deterioration among adult hospitalized patients remains a challenge.1 Delayed identification is associated with increased morbidity and mortality, unplanned intensive care unit (ICU) admissions, prolonged hospitalization, and higher costs.2,3 Earlier detection of deterioration using predictive algorithms of vital sign monitoring might avoid these negative outcomes.4 In this scoping review, we summarize current algorithms and their evidence.
Vital signs provide the backbone for detecting clinical deterioration. Early warning scores (EWS) and outreach protocols were developed to bring structure to the assessment of vital signs. Most EWS claim to predict clinical end points such as unplanned ICU admission up to 24 hours in advance.5,6 Reviews of EWS showed a positive trend toward reduced length of stay and mortality, but conclusions about general efficacy could not be drawn because of case heterogeneity and methodologic shortcomings.4,7 Continuous automated vital sign monitoring of patients on the general ward can now be accomplished with wearable devices.8 The first reports on continuous monitoring showed earlier detection of deterioration but not improved clinical end points.4,9 Since then, different reports on continuous monitoring have shown positive effects but concluded that unprocessed monitoring data per se fall short of generating actionable alarms.4,10,11
Predictive algorithms, which often use artificial intelligence (AI), are increasingly employed to recognize complex patterns or abnormalities and support predictions of events in big data sets.12,13 Especially when combined with continuous vital sign monitoring, predictive algorithms have the potential to expedite detection of clinical deterioration and improve patient outcomes. Predictive algorithms using vital signs in the ICU have shown promising results.14 The impact of predictive algorithms on the general wards, however, is unclear.
The aims of our scoping review were to explore the extent, range, and evidence of predictive vital signs–based algorithms on the adult general ward; to describe the variety of these algorithms; and to categorize the effects, facilitators, and barriers of their implementation.15
MATERIALS AND METHODS
We performed a scoping review to create a summary of the current state of research. We used the five-step method of Levac and followed the Preferred Reporting Items for Systematic reviews and Meta-Analyses Extension for Scoping Reviews guidelines (Appendix 1).16,17
PubMed, Embase, and CINAHL databases were searched for English-language articles written between January 1, 2010, and November 20, 2020. We developed the search queries with an experienced information scientist, and we used database-specific terms and strategies for input, clinical outcome, method, predictive capability, and population (Appendix 2). Additionally, we searched the references of the selected articles, as well as publications citing these articles.
All studies identified were screened by title and abstract by two researchers (RP and YE). The selected studies were read in their entirety and checked for eligibility using the following inclusion criteria: automated algorithm; vital signs-based; real-time prediction; of clinical deterioration; in an adult, general ward population. In cases where there were successive publications with the same algorithm and population, we selected the most recent study.
For screening and selection, we used the Rayyan QCRI online tool (Qatar Computing Research Institute) and EndNote X9 (Clarivate Analytics). We extracted information using a data extraction form and organized it into descriptive characteristics of the selected studies (Table 1); an input data table showing the number of admissions, intermittent or continuous measurements, vital signs measured, and laboratory results (Appendix Table 1); a table summarizing study designs and settings (Appendix Table 2); and a prediction performance table (Table 2). We report characteristics of the populations and algorithms, prediction specifications such as the area under the receiver operating characteristic curve (AUROC), and predictive values. Because predictive values are affected by prevalence, which may differ among populations, we calculated an indexed positive predictive value (PPV) and a number needed to evaluate (NNE) to compare the algorithms, using a weighted average prevalence of clinical deterioration of 3.0%.
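To make the indexed PPV and NNE concrete, the sketch below shows one common way to standardize a study’s reported sensitivity and specificity to a fixed prevalence via Bayes’ theorem. This is not the authors’ code: the exact standardization method is not detailed here, and the operating point in the example is hypothetical; only the 3.0% weighted average prevalence comes from the text.

```python
# Minimal sketch (not the authors' code): prevalence-standardized ("indexed")
# PPV and number needed to evaluate (NNE = 1/PPV). Assumes a study reports
# sensitivity and specificity; only the 3.0% weighted average prevalence
# comes from the review itself.

def indexed_ppv(sensitivity: float, specificity: float, prevalence: float = 0.03) -> float:
    """PPV recomputed at a fixed prevalence via Bayes' theorem."""
    true_pos_rate = sensitivity * prevalence
    false_pos_rate = (1.0 - specificity) * (1.0 - prevalence)
    return true_pos_rate / (true_pos_rate + false_pos_rate)

def nne(ppv: float) -> float:
    """Number needed to evaluate: alerts assessed per true deterioration event."""
    return 1.0 / ppv

# Hypothetical operating point, for illustration only.
ppv = indexed_ppv(sensitivity=0.60, specificity=0.90)
print(f"indexed PPV: {ppv:.1%}, NNE: {nne(ppv):.1f}")  # ~15.7%, ~6.4
```

Under these assumed values, roughly six to seven alerts would need to be assessed for each true deterioration event.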
We defined clinical deterioration as end points, including rapid response team activation, cardiopulmonary resuscitation, transfer to an ICU, or death.
Effects, facilitators, and barriers were identified and categorized using ATLAS.ti 8 software (ATLAS.ti) and evaluated by three researchers (RP, MK, and THvdB). These were categorized using the adapted frameworks of Gagnon et al18 for the barriers and facilitators and of Donabedian19 for the effects (Appendix 3).
The Gagnon et al framework was adapted by changing two of four domains—that is, “Individual” was changed to “Professional” and “Human” to “Physiology.” The domains of “Technology” and “Organization” remained unchanged. The Donabedian domains of “Outcome,” “Process,” and “Structure” also remained unchanged (Table 3).
We divided the studies into two groups: studies on predictive algorithms with and without AI when reporting on characteristics and performance. For the secondary aim of exploring implementation impact, we reported facilitators and barriers in a narrative way, highlighting the most frequent and notable findings.
RESULTS
As shown in the Figure, we found 1741 publications and read the full text of 109; the remaining 1632 publications did not meet the inclusion criteria. The publications by Churpek et al,20,21 Bartkowiak et al,22 Edelson et al,23 Escobar et al,24,25 and Kipnis et al26 reported on the same algorithms or databases but had substantially different approaches. When multiple publications used the same algorithm and population, the most recent was included and the earlier findings were incorporated.20,21,27-29 The resulting 21 papers are included in this review.
Descriptive characteristics of the studies are summarized in Table 1. Nineteen of the publications were full papers and two were conference abstracts. Most of the studies (n = 18) were from the United States; there was one study from South Korea,30 one study from Portugal,31 and one study from the United Kingdom.32 In 15 of the studies, there was a strict focus on general or specific wards; 6 studies also included the ICU and/or emergency departments.
Two of the studies were clinical trials, two were prospective observational studies, and 17 were retrospective studies. Five studies reported on an active predictive model during admission; of these, three reported that the model was clinically implemented, with the predictions used in the clinical workflow. None of the implemented models used AI.
All input variables are presented in Appendix Table 1.
The non-AI algorithm prediction horizons ranged from 4 to 24 hours, with a median of 24 hours (interquartile range [IQR], 12-24 hours). The AI algorithms ranged from 2 to 48 hours and had a median horizon of 14 hours (IQR, 12-24 hours).
We found three studies reporting patient outcomes. The most recent of these was a large multicenter implementation study by Escobar et al25 that included an extensive follow-up response. This study reported a significantly decreased 30-day mortality in the intervention cohort. A smaller randomized controlled trial reported no significant differences in patient outcomes with earlier warning alarms.27 A third study reported more appropriate rapid response team deployment and decreased mortality in a subgroup analysis.35
Effects, Facilitators, and Barriers
As shown in the Appendix Figure and further detailed in Table 3, the described effects were predominantly positive—57 positive effects vs 11 negative effects. These positive effects sorted primarily into the outcome and process domains.
All of the studies that compared their proposed model with one of various warning systems (eg, EWS, National Early Warning Score [NEWS], Modified Early Warning Score [MEWS]) showed superior performance (based on AUROC and reported predictive values). In 17 studies, the authors reported their model as more useful or superior to the EWS.20-23,26-28,34,36-41 Four studies reported real-time detection of deterioration before regular EWS,20,26,42 and three studies reported positive effects on patient-related outcomes.26,35 Four negative effects were noted on the controllability, validity, and potential limitations.27,42
Of the 38 remarks in the Technology domain, difficulty with implementation in daily practice was a commonly cited barrier.22,24,40,42 Difficulties included creating real-time data feeds from the EMR, although some successful examples were mentioned.25,27,36 The limited interpretability of AI was also considered a potential barrier.30,32,33,35,39,41 Several remarks questioned the applicability of prolonged prediction horizons because they decouple the alert from the clinical picture.39,42
Conservative attitudes toward new technologies and inadequate knowledge were mentioned as barriers.39 Repeated remarks were made on the difficulty of interpreting and responding to a predicted escalation, as the clinical pattern might not be recognizable at such an early stage. On the other hand, it is expected that less invasive countermeasures would be adequate to avert further escalation. Earlier recognition of possible escalations also raised potential ethical questions, such as when to discuss palliative care.24
The heterogeneity of the general ward population and the relatively low prevalence of deterioration were mentioned as barriers.24,30,38,41 There were also concerns that not all escalations are preventable and that some patient outcomes may not be modifiable.24,38
Many investigators expected reductions in false alarms and associated alarm fatigue (reflected as higher PPVs), as well as improved workflow and decreased workload.21,23,27,31,33,35,38,41 Despite the capacity of modern EMRs to store large amounts of patient data, some investigators felt that improvements in real-time access, data quality and validity, and data density were needed to ensure valid predictions.21,22,24,32,37
DISCUSSION
As the complexity and comorbidity of hospitalized adults grow, predicting clinical deterioration is becoming more important. With an ever-increasing amount of data available in the EMR, predictive algorithms are increasingly being developed to support earlier recognition of deterioration on the general ward.
There are several important limitations across these studies. In a clinical setting, these models would function as a screening test. Almost all studies report an AUROC; however, sensitivity and PPV or NNE (defined as 1/PPV) may be more useful than AUROC when predicting low-frequency events with high potential clinical impact.44 Assessing the NNE is especially relevant because of its relation to alarm fatigue and the responsiveness of clinicians.43 Alarm fatigue and lack of adequate response to alarms were repeatedly cited as potential barriers to the application of automated scores.
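As a purely illustrative calculation of why NNE matters more than AUROC at low prevalence (hypothetical operating points, not results from the included studies), the sketch below holds sensitivity fixed and varies specificity at the review’s 3% prevalence: even highly specific models still trigger several alerts per true event, which is the workload that drives alarm fatigue.

```python
# Illustrative arithmetic only (hypothetical operating points, not study
# results): at a 3% event prevalence, even highly specific models generate
# several alerts per true event. NNE (= 1/PPV) captures this workload; AUROC
# does not.

prevalence = 0.03
sensitivity = 0.80  # assumed for illustration

for specificity in (0.90, 0.95, 0.99):
    tp = sensitivity * prevalence                  # expected true positives per patient
    fp = (1.0 - specificity) * (1.0 - prevalence)  # expected false positives per patient
    ppv = tp / (tp + fp)
    print(f"specificity {specificity:.2f}: PPV {ppv:.2f}, NNE {1 / ppv:.1f}")
# specificity 0.90: PPV 0.20, NNE 5.0
# specificity 0.95: PPV 0.33, NNE 3.0
# specificity 0.99: PPV 0.71, NNE 1.4
```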
Although the results of our scoping review are promising, there are limited data on clinical outcomes using these algorithms. Only three of five algorithms were used to guide clinical decision-making.25,27,35 Kollef et al27 showed shorter hospitalizations, and Evans et al35 found decreased mortality rates in a multimorbid subgroup. Escobar et al25 found an overall and consistent decrease in mortality in a large, heterogeneous population of inpatients across 21 hospitals. While Escobar et al’s findings provide strong evidence that predictive algorithms and structured follow-up on alarms can improve patient outcomes, the authors recognize that not all facilities will have the resources to implement them.25 Dedicated round-the-clock follow-up of alarms has yet to be proven feasible for smaller institutions, and leaner solutions must be explored. The example set by Escobar et al25 should be translated into various settings to prove its reproducibility and to substantiate the clinical impact of predictive models and structured follow-up.
According to expert opinion, the use of high-frequency or continuous monitoring at low-acuity wards and AI algorithms to detect trends and patterns will reduce failure-to-rescue rates.4,9,43 However, most studies in our review focused on periodic spot-checked vital signs, and none of the AI algorithms were implemented in clinical care (Appendix Table 1).
STRENGTHS AND LIMITATIONS
We performed a comprehensive review of the current literature using a clear and reproducible methodology to minimize the risk of missing relevant publications. The identified research is mainly limited to large US centers and consists of mostly retrospective studies. Heterogeneity among inputs, endpoints, time horizons, and evaluation metrics makes comparisons challenging. Comments on facilitators, barriers, and effects were limited.
RECOMMENDATIONS FOR FUTURE RESEARCH
Artificial intelligence and the use of continuous monitoring hold great promise in creating optimal predictive algorithms. Future studies should directly compare AI- and non-AI-based algorithms using continuous monitoring to determine predictive accuracy, feasibility, costs, and outcomes. A consensus on endpoint definitions, input variables, methodology, and reporting is needed to enhance reproducibility, comparability, and generalizability of future research.
CONCLUSION
- van Galen LS, Struik PW, Driesen BEJM, et al. Delayed recognition of deterioration of patients in general wards is mostly caused by human related monitoring failures: a root cause analysis of unplanned ICU admissions. PLoS One. 2016;11(8):e0161393. https://doi.org/10.1371/journal.pone.0161393
- Mardini L, Lipes J, Jayaraman D. Adverse outcomes associated with delayed intensive care consultation in medical and surgical inpatients. J Crit Care. 2012;27(6):688-693. https://doi.org/10.1016/j.jcrc.2012.04.011
- Young MP, Gooder VJ, McBride K, James B, Fisher ES. Inpatient transfers to the intensive care unit: delays are associated with increased mortality and morbidity. J Gen Intern Med. 2003;18(2):77-83. https://doi.org/10.1046/j.1525-1497.2003.20441.x
- Khanna AK, Hoppe P, Saugel B. Automated continuous noninvasive ward monitoring: future directions and challenges. Crit Care. 2019;23(1):194. https://doi.org/10.1186/s13054-019-2485-7
- Ludikhuize J, Hamming A, de Jonge E, Fikkers BG. Rapid response systems in The Netherlands. Jt Comm J Qual Patient Saf. 2011;37(3):138-197. https://doi.org/10.1016/s1553-7250(11)37017-1
- Cuthbertson BH, Boroujerdi M, McKie L, Aucott L, Prescott G. Can physiological variables and early warning scoring systems allow early recognition of the deteriorating surgical patient? Crit Care Med. 2007;35(2):402-409. https://doi.org/10.1097/01.ccm.0000254826.10520.87
- Alam N, Hobbelink EL, van Tienhoven AJ, van de Ven PM, Jansma EP, Nanayakkara PWB. The impact of the use of the Early Warning Score (EWS) on patient outcomes: a systematic review. Resuscitation. 2014;85(5):587-594. https://doi.org/10.1016/j.resuscitation.2014.01.013
- Weenk M, Koeneman M, van de Belt TH, Engelen LJLPG, van Goor H, Bredie SJH. Wireless and continuous monitoring of vital signs in patients at the general ward. Resuscitation. 2019;136:47-53. https://doi.org/10.1016/j.resuscitation.2019.01.017
- Cardona-Morrell M, Prgomet M, Turner RM, Nicholson M, Hillman K. Effectiveness of continuous or intermittent vital signs monitoring in preventing adverse events on general wards: a systematic review and meta-analysis. Int J Clin Pract. 2016;70(10):806-824. https://doi.org/10.1111/ijcp.12846
- Brown H, Terrence J, Vasquez P, Bates DW, Zimlichman E. Continuous monitoring in an inpatient medical-surgical unit: a controlled clinical trial. Am J Med. 2014;127(3):226-232. https://doi.org/10.1016/j.amjmed.2013.12.004
- Mestrom E, De Bie A, van de Steeg M, Driessen M, Atallah L, Bezemer R. Implementation of an automated early warning scoring system in a surgical ward: practical use and effects on patient outcomes. PLoS One. 2019;14(5):e0213402. https://doi.org/10.1371/journal.pone.0213402
- Jiang F, Jiang Y, Zhi H, et al. Artificial intelligence in healthcare: past, present and future. Stroke Vasc Neurol. 2017;2(4):230-243. https://doi.org/10.1136/svn-2017-000101
- Iwashyna TJ, Liu V. What’s so different about big data? A primer for clinicians trained to think epidemiologically. Ann Am Thorac Soc. 2014;11(7):1130-1135. https://doi.org/10.1513/annalsats.201405-185as
- Jalali A, Bender D, Rehman M, Nadkanri V, Nataraj C. Advanced analytics for outcome prediction in intensive care units. Conf Proc IEEE Eng Med Biol Soc. 2016;2016:2520-2524. https://doi.org/10.1109/embc.2016.7591243
- Munn Z, Peters MDJ, Stern C, Tufanaru C, McArthur A, Aromataris E. Systematic review or scoping review? Guidance for authors when choosing between a systematic or scoping review approach. BMC Med Res Methodol. 2018;18(1):143. https://doi.org/10.1186/s12874-018-0611-x
- Arksey H, O’Malley L. Scoping studies: towards a methodological framework. Int J Soc Res Methodol. 2005;8(1):19-32. https://doi.org/10.1080/1364557032000119616
- Tricco AC, Lillie E, Zarin W, et al. PRISMA extension for scoping reviews (PRISMA-ScR): checklist and explanation. Ann Intern Med. 2018;169(7):467-473. https://doi.org/10.7326/m18-0850
- Gagnon MP, Desmartis M, Gagnon J, et al. Framework for user involvement in health technology assessment at the local level: views of health managers, user representatives, and clinicians. Int J Technol Assess Health Care. 2015;31(1-2):68-77. https://doi.org/10.1017/s0266462315000070
- Donabedian A. The quality of care. How can it be assessed? JAMA. 1988;260(12):1743-1748. https://doi.org/10.1001/jama.260.12.1743
- Churpek MM, Yuen TC, Winslow C, et al. Multicenter development and validation of a risk stratification tool for ward patients. Am J Respir Crit Care Med. 2014;190(6):649-655. https://doi.org/10.1164/rccm.201406-1022oc
- Churpek MM, Yuen TC, Winslow C, Meltzer DO, Kattan MW, Edelson DP. Multicenter comparison of machine learning methods and conventional regression for predicting clinical deterioration on the wards. Crit Care Med. 2016;44(2):368-374. https://doi.org/10.1097/ccm.0000000000001571
- Bartkowiak B, Snyder AM, Benjamin A, et al. Validating the electronic cardiac arrest risk triage (eCART) score for risk stratification of surgical inpatients in the postoperative setting: retrospective cohort study. Ann Surg. 2019;269(6):1059-1063. https://doi.org/10.1097/sla.0000000000002665
- Edelson DP, Carey K, Winslow CJ, Churpek MM. Less is more: detecting clinical deterioration in the hospital with machine learning using only age, heart rate and respiratory rate. Abstract presented at: American Thoracic Society International Conference; May 22, 2018; San Diego, California. Am J Resp Crit Care Med. 2018;197:A4444.
- Escobar GJ, LaGuardia JC, Turk BJ, Ragins A, Kipnis P, Draper D. Early detection of impending physiologic deterioration among patients who are not in intensive care: development of predictive models using data from an automated electronic medical record. J Hosp Med. 2012;7(5):388-395. https://doi.org/10.1002/jhm.1929
- Escobar GJ, Liu VX, Schuler A, Lawson B, Greene JD, Kipnis P. Automated identification of adults at risk for in-hospital clinical deterioration. N Engl J Med. 2020;383(20):1951-1960. https://doi.org/10.1056/nejmsa2001090
- Kipnis P, Turk BJ, Wulf DA, et al. Development and validation of an electronic medical record-based alert score for detection of inpatient deterioration outside the ICU. J Biomed Inform. 2016;64:10-19. https://doi.org/10.1016/j.jbi.2016.09.013
- Kollef MH, Chen Y, Heard K, et al. A randomized trial of real-time automated clinical deterioration alerts sent to a rapid response team. J Hosp Med. 2014;9(7):424-429. https://doi.org/10.1002/jhm.2193
- Hackmann G, Chen M, Chipara O, et al. Toward a two-tier clinical warning system for hospitalized patients. AMIA Annu Symp Proc. 2011;2011:511-519.
- Bailey TC, Chen Y, Mao Y, Lu C, Hackmann G, Micek ST. A trial of a real-time alert for clinical deterioration in patients hospitalized on general medical wards. J Hosp Med. 2013;8(5):236-242. https://doi.org/10.1002/jhm.2009
- Kwon JM, Lee Y, Lee Y, Lee S, Park J. An algorithm based on deep learning for predicting in-hospital cardiac arrest. J Am Heart Assoc. 2018;7(13):e008678. https://doi.org/10.1161/jaha.118.008678
- Correia S, Gomes A, Shahriari S, Almeida JP, Severo M, Azevedo A. Performance of the early warning system vital to predict unanticipated higher-level of care admission and in-hospital death of ward patients. Value Health. 2018;21(S3):S360. https://doi.org/10.1016/j.jval.2018.09.2152
- Shamout FE, Zhu T, Sharma P, Watkinson PJ, Clifton DA. Deep interpretable early warning system for the detection of clinical deterioration. IEEE J Biomed Health Inform. 2020;24(2):437-446. https://doi.org/10.1109/jbhi.2019.2937803
- Bai Y, Do DH, Harris PRE, et al. Integrating monitor alarms with laboratory test results to enhance patient deterioration prediction. J Biomed Inform. 2015;53:81-92. https://doi.org/10.1016/j.jbi.2014.09.006
- Hu X, Sapo M, Nenov V, et al. Predictive combinations of monitor alarms preceding in-hospital code blue events. J Biomed Inform. 2012;45(5):913-921. https://doi.org/10.1016/j.jbi.2012.03.001
- Evans RS, Kuttler KG, Simpson KJ, et al. Automated detection of physiologic deterioration in hospitalized patients. J Am Med Inform Assoc. 2015;22(2):350-360. https://doi.org/10.1136/amiajnl-2014-002816
- Ghosh E, Eshelman L, Yang L, Carlson E, Lord B. Early deterioration indicator: data-driven approach to detecting deterioration in general ward. Resuscitation. 2018;122:99-105. https://doi.org/10.1016/j.resuscitation.2017.10.026
- Kang MA, Churpek MM, Zadravecz FJ, Adhikari R, Twu NM, Edelson DP. Real-time risk prediction on the wards: a feasibility study. Crit Care Med. 2016;44(8):1468-1473. https://doi.org/10.1097/ccm.0000000000001716
- Hu SB, Wong DJL, Correa A, Li N, Deng JC. Prediction of clinical deterioration in hospitalized adult patients with hematologic malignancies using a neural network model. PLoS One. 2016;11(8):e0161401. https://doi.org/10.1371/journal.pone.0161401
- Rothman MJ, Rothman SI, Beals J 4th. Development and validation of a continuous measure of patient condition using the electronic medical record. J Biomed Inform. 2013;46(5):837-848. https://doi.org/10.1016/j.jbi.2013.06.011
- Alaa AM, Yoon J, Hu S, van der Schaar M. Personalized risk scoring for critical care prognosis using mixtures of Gaussian processes. IEEE Trans Biomed Eng. 2018;65(1):207-218. https://doi.org/10.1109/tbme.2017.2698602
- Mohamadlou H, Panchavati S, Calvert J, et al. Multicenter validation of a machine-learning algorithm for 48-h all-cause mortality prediction. Health Informatics J. 2020;26(3):1912-1925. https://doi.org/10.1177/1460458219894494
- Alvarez CA, Clark CA, Zhang S, et al. Predicting out of intensive care unit cardiopulmonary arrest or death using electronic medical record data. BMC Med Inform Decis Mak. 2013;13:28. https://doi.org/10.1186/1472-6947-13-28
- Vincent JL, Einav S, Pearse R, et al. Improving detection of patient deterioration in the general hospital ward environment. Eur J Anaesthesiol. 2018;35(5):325-333. https://doi.org/10.1097/eja.0000000000000798
- Romero-Brufau S, Huddleston JM, Escobar GJ, Liebow M. Why the C-statistic is not informative to evaluate early warning scores and what metrics to use. Crit Care. 2015;19(1):285. https://doi.org/10.1186/s13054-015-0999-1
- Weenk M, Bredie SJ, Koeneman M, Hesselink G, van Goor H, van de Belt TH. Continuous monitoring of the vital signs in the general ward using wearable devices: randomized controlled trial. J Med Internet Res. 2020;22(6):e15471. https://doi.org/10.2196/15471
- Wellner B, Grand J, Canzone E, et al. Predicting unplanned transfers to the intensive care unit: a machine learning approach leveraging diverse clinical elements. JMIR Med Inform. 2017;5(4):e45. https://doi.org/10.2196/medinform.8680
- Elliott M, Baird J. Pulse oximetry and the enduring neglect of respiratory rate assessment: a commentary on patient surveillance. Br J Nurs. 2019;28(19):1256-1259. https://doi.org/10.12968/bjon.2019.28.19.1256
- Blackwell JN, Keim-Malpass J, Clark MT, et al. Early detection of in-patient deterioration: one prediction model does not fit all. Crit Care Explor. 2020;2(5):e0116. https://doi.org/10.1097/cce.0000000000000116
- Johnson AEW, Pollard TJ, Shen L, et al. MIMIC-III, a freely accessible critical care database. Sci Data. 2016;3:160035. https://doi.org/10.1038/sdata.2016.35
- Bodenheimer T, Sinsky C. From triple to quadruple aim: care of the patient requires care of the provider. Ann Fam Med. 2014;12(6):573-576. https://doi.org/10.1370/afm.1713
- Kirkland LL, Malinchoc M, O’Byrne M, et al. A clinical deterioration prediction tool for internal medicine patients. Am J Med Qual. 2013;28(2):135-142. https://doi.org/10.1177/1062860612450459
Women with migraine are ‘high-risk’ patients during pregnancy
Women with migraine are “high-risk” patients during pregnancy, new research suggests. Although pregnancy is generally considered a “safe period” for women with migraine, “we actually found they have more diabetes, more hypertension, more blood clots, more complications during their delivery, and more postpartum complications,” said study investigator Nirit Lev, MD, PhD, head of the department of neurology, Meir Medical Center, Kfar Saba, and the Sackler Faculty of Medicine, Tel Aviv University.
The results highlight the need for clinicians “to take people with migraines seriously” and reinforce the idea that migraine is not “just a headache,” said Dr. Lev.
Pregnant women with migraine should be considered high risk and have specialized neurologic follow-up during pregnancy and the postpartum period, she added.
The findings were presented at the 2021 Congress of the European Academy of Neurology.
Prevalent, disabling
Migraine is one of the most prevalent and disabling neurologic disorders; as a group, neurologic disorders are major causes of death and disability worldwide.
In childhood, there’s no difference between the sexes in terms of migraine prevalence, but after puberty, migraine is about three times more common in women than men. Fluctuating levels of estrogen and progesterone likely explain these differences, said Dr. Lev.
The prevalence of migraine among females peaks during their reproductive years. Most female migraine patients report an improvement in headache symptoms during pregnancy, with some experiencing a “complete remission.” However, a minority report worsening of migraine when expecting a child, said Dr. Lev.
Some patients have their first aura during pregnancy. The most common migraine aura is visual, involving disturbances of the visual field, although auras can also affect motor and sensory function, said Dr. Lev.
Managing migraine during pregnancy is “very complicated,” said Dr. Lev. She said the first-line treatment is paracetamol (acetaminophen) and stressed that taking opioids should be avoided.
Retrospective database study
For the study, the researchers retrospectively reviewed pregnancy and delivery records from a database of Clalit Medical Services, which has more than 4.5 million members and is the largest such database in Israel. They collected demographic data and information on mode of delivery, medical and obstetric complications, hospitalizations, emergency department visits, use of medications, laboratory reports, and medical consultations.
The study included 145,102 women who gave birth from 2014 to 2020.
Of these, 10,646 had migraine without aura, and 1,576 had migraine with aura. The migraine diagnoses, which were based on International Headache Society criteria and diagnostic codes, were made prior to pregnancy.
Dr. Lev noted that the number of patients with migraine is likely an underestimation because migraine is “not always diagnosed.”
Results showed that the risk for obstetric complications was higher among pregnant women with migraine, especially those with aura, in comparison with women without migraine. About 6.9% of patients with migraine without aura were admitted to high-risk hospital departments, compared with 6% of pregnant control patients who did not have migraine (P < .0001). For patients with migraine with aura, the risk for admissions was even higher (8.7%; P < .0001 vs. control patients and P < .03 vs. patients with migraine without aura) and was “very highly statistically significant,” said Dr. Lev.
Pregnant women with migraine were at significantly increased risk for gestational diabetes, hyperlipidemia, and being diagnosed with a psychiatric disorder (all P < .0001). These women were also more likely to experience preeclampsia and blood clots (P < .0001).
Unexpected finding
The finding of a higher risk for gestational diabetes was “unexpected,” because it is typically older women with migraine who are at increased risk for metabolic syndrome and higher body mass index, said Dr. Lev.
Migraine patients had significantly more consultations with family physicians, gynecologists, and neurologists (P < .0001). In addition, they were more likely to utilize emergency services; take more medications, mostly analgesics; and undergo more laboratory studies and brain imaging.
Those with aura had significantly more specialist consultations and took more medications compared with migraine patients without aura.
There was a statistically significant increase in the use of epidural anesthesia for migraine patients (40.5% of women without migraine; 45.7% of those with migraine accompanied by aura; and 47.5% of migraine patients without aura).
This was an “interesting” finding, said Dr. Lev. “We didn’t know what to expect; people with migraine are used to pain, so the question was, will they tolerate pain better or be more afraid of pain?”
Women with migraine also experienced more assisted deliveries with increased use of vacuums and forceps.
During the 3-month postpartum period, women with migraine sought more medical consultations and used more medications compared with control patients. They also underwent more lab examinations and more brain imaging during this period.
Dr. Lev noted that some of these evaluations may have been postponed because of the pregnancy.
Women with migraine also had a greater risk for postpartum depression, which Dr. Lev found “concerning.” She noted that depression is often underreported but is treatable. Women with migraine should be monitored for depression post partum, she said.
It’s unclear which factors contribute to the increased risk for pregnancy complications in women with migraine. Dr. Lev said she doesn’t believe it’s drug related.
“Although they’re taking more medications than people who don’t have migraine, we still are giving very low doses and only safe medicines, so I don’t think these increased risks are side effects,” she said.
She noted that women with migraine have more cardiovascular complications, including stroke and myocardial infarction, although these generally affect older patients.
Dr. Lev also noted that pain, especially chronic pain, can cause depression. “We know that people with migraine have more depression and anxiety, so maybe that also affects them during their pregnancy and after,” she said.
She suggested that pregnant women with migraine be considered high risk and be managed via specialized clinics.
Room for improvement
Commenting on the research, Lauren Doyle Strauss, DO, associate professor of neurology, Wake Forest University, Winston-Salem, N.C., who has written about the management of migraine during pregnancy, said studies such as this help raise awareness about pregnancy risks in migraine patients. Dr. Strauss did not attend the live presentation but is aware of the findings.
The increased use of epidurals during delivery among migraine patients in the study makes some sense, said Dr. Strauss. “It kind of shows a comfort level with medicines.”
She expressed concern that such research may be “skewed” because it includes patients with more severe migraine. If less severe cases were included in this research, “maybe there would still be higher risks, but not as high as what we have been finding in some of our studies,” she said.
Dr. Strauss said she feels the medical community should do a better job of identifying and diagnosing migraine. She said she would like to see migraine screening become a routine part of obstetric/gynecologic care. Doctors should counsel migraine patients who wish to become pregnant about potential risks, said Dr. Strauss. “We need to be up front in telling them when to seek care and when to report symptoms and not to wait for it to become super severe,” she said.
She also believes doctors should be “proactive” in helping patients develop a treatment plan before becoming pregnant, because the limited pain control options available for pregnant patients can take time to have an effect.
Also commenting on the study findings, Nina Riggins, MD, PhD, clinical associate professor of neurology at the University of California, San Francisco, said the study raises “important questions” and has “important aims.”
She believes the study reinforces the importance of collaboration between experts in primary care, obstetrics/gynecology, and neurology. However, she was surprised at some of the investigators’ assertions that there are no differences in migraine among prepubertal children and that the course of migraine for men is stable throughout their life span.
“There is literature that supports the view that the prevalence in boys is higher in prepuberty, and studies do show that migraine prevalence decreases in older adults – men and women,” she said.
There is still not enough evidence to determine that antiemetics and triptans are safe during pregnancy or that pregnant women with migraine should be taking acetylsalicylic acid, said Dr. Riggins.
The investigators, Dr. Strauss, and Dr. Riggins have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
new research suggests. Although pregnancy is generally considered a “safe period” for women with migraine, “we actually found they have more diabetes, more hypertension, more blood clots, more complications during their delivery, and more postpartum complications,” said study investigator Nirit Lev, MD, PhD, head, department of neurology, Meir Medical Center, Kfar Saba Sackler Faculty of Medicine, Tel Aviv University.
The results highlight the need for clinicians “to take people with migraines seriously” and reinforce the idea that migraine is not “just a headache,” said Dr. Lev.
Pregnant women with migraine should be considered high risk and have specialized neurologic follow-up during pregnancy and the postpartum period, she added.
The findings were presented at the 2021 Congress of the European Academy of Neurology.
Prevalent, disabling
Migraine is one of the most prevalent and disabling neurologic disorders. Such disorders are major causes of death and disability.
In childhood, there’s no difference between the sexes in terms of migraine prevalence, but after puberty, migraine is about three times more common in women than men. Fluctuating levels of estrogen and progesterone likely explain these differences, said Dr. Lev.
The prevalence of migraine among females peaks during their reproductive years. Most female migraine patients report an improvement in headache symptoms during pregnancy, with some experiencing a “complete remission.” However, a minority report worsening of migraine when expecting a child, said Dr. Lev.
Some patients have their first aura during pregnancy. The most common migraine aura is visual, a problem with the visual field that can affect motor and sensory functioning, said Dr. Lev.
Managing migraine during pregnancy is “very complicated,” said Dr. Lev. She said the first-line treatment is paracetamol (acetaminophen) and stressed that taking opioids should be avoided.
Retrospective database study
For the study, the researchers retrospectively reviewed pregnancy and delivery records from a database of Clalit Medical Services, which has more than 4.5 million members and is the largest such database in Israel. They collected demographic data and information on mode of delivery, medical and obstetric complications, hospitalizations, emergency department visits, use of medications, laboratory reports, and medical consultations.
The study included 145,102 women who gave birth from 2014 to 2020.
Of these, 10,646 had migraine without aura, and 1,576 had migraine with aura. The migraine diagnoses, which were based on International Headache Society criteria and diagnostic codes, were made prior to pregnancy.
Dr. Lev noted that the number of patients with migraine is likely an underestimation because migraine is “not always diagnosed.”
Results showed that the risk for obstetric complications was higher among pregnant women with migraine, especially those with aura, in comparison with women without migraine. About 6.9% of patients with migraine without aura were admitted to high-risk hospital departments, compared with 6% of pregnant control patients who did not have migraine (P < .0001). For patients with migraine with aura, the risk for admissions was even higher (8.7%; P < .0001 vs. control patients and P < .03 vs. patients with migraine without aura) and was “very highly statistically significant,” said Dr. Lev.
Pregnant women with migraine were at significantly increased risk for gestational diabetes, hyperlipidemia, and being diagnosed with a psychiatric disorder (all P < .0001). These women were also more likely to experience preeclampsia and blood clots (P < .0001).
Unexpected finding
The finding that the risk for diabetes was higher was “unexpected,” inasmuch as older women with migraine are typically at increased risk for metabolic syndrome and higher body mass index, said Dr. Lev.
Migraine patients had significantly more consultations with family physicians, gynecologists, and neurologists (P < .0001). In addition, they were more likely to utilize emergency services; take more medications, mostly analgesics; and undergo more laboratory studies and brain imaging.
Those with aura had significantly more specialist consultations and took more medications compared with migraine patients without aura.
There was a statistically significant increase in the use of epidural anesthesia for migraine patients (40.5% of women without migraine; 45.7% of those with migraine accompanied by aura; and 47.5% of migraine patients without aura).
This was an “interesting” finding, said Dr. Lev. “We didn’t know what to expect; people with migraine are used to pain, so the question was, will they tolerate pain better or be more afraid of pain?”
Women with migraine also experienced more assisted deliveries with increased use of vacuums and forceps.
During the 3-month postpartum period, women with migraine sought more medical consultations and used more medications compared with control patients. They also underwent more lab examinations and more brain imaging during this period.
Dr. Lev noted that some of these evaluations may have been postponed because of the pregnancy.
Women with migraine also had a greater risk for postpartum depression, which Dr. Lev found “concerning.” She noted that depression is often underreported but is treatable. Women with migraine should be monitored for depression post partum, she said.
It’s unclear which factors contribute to the increased risk for pregnancy complications in women with migraine. Dr. Lev said she doesn’t believe it’s drug related.
“Although they’re taking more medications than people who don’t have migraine, we still are giving very low doses and only safe medicines, so I don’t think these increased risks are side effects,” she said.
She noted that women with migraine have more cardiovascular complications, including stroke and myocardial infarction, although these generally affect older patients.
Dr. Lev also noted that pain, especially chronic pain, can cause depression. “We know that people with migraine have more depression and anxiety, so maybe that also affects them during their pregnancy and after,” she said.
She suggested that pregnant women with migraine be considered high risk and be managed via specialized clinics.
Room for improvement
Commenting on the research, Lauren Doyle Strauss, DO, associate professor of neurology, Wake Forest University, Winston-Salem, N.C., who has written about the management of migraine during pregnancy, said studies such as this help raise awareness about pregnancy risks in migraine patients. Dr. Strauss did not attend the live presentation but is aware of the findings.
The increased use of epidurals during delivery among migraine patients in the study makes some sense, said Dr. Strauss. “It kind of shows a comfort level with medicines.”
She expressed concern that such research may be “skewed” because it includes patients with more severe migraine. If less severe cases were included in this research, “maybe there would still be higher risks, but not as high as what we have been finding in some of our studies,” she said.
Dr. Strauss said she feels the medical community should do a better job of identifying and diagnosing migraine. She said she would like to see migraine screening become a routine part of obstetric/gynecologic care. Doctors should counsel migraine patients who wish to become pregnant about potential risks, said Dr. Strauss. “We need to be up front in telling them when to seek care and when to report symptoms and not to wait for it to become super severe,” she said.
She also believes doctors should be “proactive” in helping patients develop a treatment plan before becoming pregnant, because the limited pain control options available for pregnant patients can take time to have an effect.
Also commenting on the study findings, Nina Riggins, MD, PhD, clinical associate professor of neurology at the University of California, San Francisco, said the study raises “important questions” and has “important aims.”
She believes the study reinforces the importance of collaboration between experts in primary care, obstetrics/gynecology, and neurology. However, she was surprised at some of the investigators’ assertions that there are no differences in migraine among prepubertal children and that the course of migraine for men is stable throughout their life span.
“There is literature that supports the view that the prevalence in boys is higher in prepuberty, and studies do show that migraine prevalence decreases in older adults – men and women,” she said.
There is still not enough evidence to determine that antiemetics and triptans are safe during pregnancy or that pregnant women with migraine should be taking acetylsalicylic acid, said Dr. Riggins.
The investigators, Dr. Strauss, and Dr. Riggins have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
new research suggests. Although pregnancy is generally considered a “safe period” for women with migraine, “we actually found they have more diabetes, more hypertension, more blood clots, more complications during their delivery, and more postpartum complications,” said study investigator Nirit Lev, MD, PhD, head, department of neurology, Meir Medical Center, Kfar Saba Sackler Faculty of Medicine, Tel Aviv University.
The results highlight the need for clinicians “to take people with migraines seriously” and reinforce the idea that migraine is not “just a headache,” said Dr. Lev.
Pregnant women with migraine should be considered high risk and have specialized neurologic follow-up during pregnancy and the postpartum period, she added.
The findings were presented at the 2021 Congress of the European Academy of Neurology.
Prevalent, disabling
Migraine is one of the most prevalent and disabling neurologic disorders, a class of conditions that collectively are major causes of death and disability.
In childhood, there’s no difference between the sexes in terms of migraine prevalence, but after puberty, migraine is about three times more common in women than men. Fluctuating levels of estrogen and progesterone likely explain these differences, said Dr. Lev.
The prevalence of migraine among females peaks during their reproductive years. Most female migraine patients report an improvement in headache symptoms during pregnancy, with some experiencing a “complete remission.” However, a minority report worsening of migraine when expecting a child, said Dr. Lev.
Some patients experience their first aura during pregnancy. The most common migraine aura is visual, a disturbance of the visual field, though auras can also affect motor and sensory functioning, said Dr. Lev.
Managing migraine during pregnancy is “very complicated,” said Dr. Lev. She said the first-line treatment is paracetamol (acetaminophen) and stressed that taking opioids should be avoided.
Retrospective database study
For the study, the researchers retrospectively reviewed pregnancy and delivery records from a database of Clalit Medical Services, which has more than 4.5 million members and is the largest such database in Israel. They collected demographic data and information on mode of delivery, medical and obstetric complications, hospitalizations, emergency department visits, use of medications, laboratory reports, and medical consultations.
The study included 145,102 women who gave birth from 2014 to 2020.
Of these, 10,646 had migraine without aura, and 1,576 had migraine with aura. The migraine diagnoses, which were based on International Headache Society criteria and diagnostic codes, were made prior to pregnancy.
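As a quick arithmetic check, the share of deliveries carrying a prepregnancy migraine diagnosis can be read directly off these counts. The short Python sketch below simply restates the reported numbers as proportions; it is illustrative only, not a re-analysis.

```python
# Proportions of delivering women with a prior migraine diagnosis,
# computed from the counts reported in the study (illustrative only).
total_deliveries = 145_102
migraine_without_aura = 10_646
migraine_with_aura = 1_576

print(f"Migraine without aura: {migraine_without_aura / total_deliveries:.1%}")  # ~7.3%
print(f"Migraine with aura:    {migraine_with_aura / total_deliveries:.1%}")     # ~1.1%
```

Both figures are well below the migraine prevalence usually reported for women of reproductive age, consistent with the underdiagnosis Dr. Lev describes next.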
Dr. Lev noted that the number of patients with migraine is likely an underestimation because migraine is “not always diagnosed.”
Results showed that the risk for obstetric complications was higher among pregnant women with migraine, especially those with aura, in comparison with women without migraine. About 6.9% of patients with migraine without aura were admitted to high-risk hospital departments, compared with 6% of pregnant control patients who did not have migraine (P < .0001). For patients with migraine with aura, the risk for admissions was even higher (8.7%; P < .0001 vs. control patients and P < .03 vs. patients with migraine without aura) and was “very highly statistically significant,” said Dr. Lev.
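To see why differences that look modest in absolute terms (6.9% and 8.7% vs. 6%) reach such small P values in a cohort this large, a rough two-proportion z test on counts rebuilt from the rounded percentages is enough. This is a minimal sketch, not the investigators’ analysis: the control-group size is assumed from the cohort totals, the counts come from rounded percentages, and the exact published P values will not be reproduced.

```python
# Rough reconstruction, not the investigators' analysis: counts are rebuilt
# from the rounded percentages in the article, so the P values here are
# approximate and will not match the published ones exactly.
from math import sqrt, erfc

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z test for a difference between two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    return z, erfc(abs(z) / sqrt(2))  # two-sided P value, normal approximation

controls_n = 145_102 - 10_646 - 1_576       # assumed size of the no-migraine group
control_admits = round(0.060 * controls_n)  # 6.0% admitted to high-risk departments

for label, pct, n in [("without aura", 0.069, 10_646), ("with aura", 0.087, 1_576)]:
    z, p = two_proportion_z(round(pct * n), n, control_admits, controls_n)
    print(f"Migraine {label} vs. controls: z = {z:.2f}, P ~ {p:.1e}")
```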
Pregnant women with migraine were at significantly increased risk for gestational diabetes, hyperlipidemia, and being diagnosed with a psychiatric disorder (all P < .0001). These women were also more likely to experience preeclampsia and blood clots (P < .0001).
Unexpected finding
The finding that the risk for diabetes was higher was “unexpected,” inasmuch as it is typically older women with migraine who are at increased risk for metabolic syndrome and higher body mass index, said Dr. Lev.
Migraine patients had significantly more consultations with family physicians, gynecologists, and neurologists (P < .0001). In addition, they were more likely to utilize emergency services; take more medications, mostly analgesics; and undergo more laboratory studies and brain imaging.
Those with aura had significantly more specialist consultations and took more medications compared with migraine patients without aura.
There was a statistically significant increase in the use of epidural anesthesia among migraine patients: 47.5% of those with migraine without aura and 45.7% of those with migraine with aura received an epidural, compared with 40.5% of women without migraine.
This was an “interesting” finding, said Dr. Lev. “We didn’t know what to expect; people with migraine are used to pain, so the question was, will they tolerate pain better or be more afraid of pain?”
Women with migraine also experienced more assisted deliveries with increased use of vacuums and forceps.
During the 3-month postpartum period, women with migraine sought more medical consultations and used more medications compared with control patients. They also underwent more lab examinations and more brain imaging during this period.
Dr. Lev noted that some of these evaluations may have been postponed because of the pregnancy.
Women with migraine also had a greater risk for postpartum depression, which Dr. Lev found “concerning.” She noted that depression is often underreported but is treatable. Women with migraine should be monitored for depression post partum, she said.
It’s unclear which factors contribute to the increased risk for pregnancy complications in women with migraine. Dr. Lev said she doesn’t believe it’s drug related.
“Although they’re taking more medications than people who don’t have migraine, we still are giving very low doses and only safe medicines, so I don’t think these increased risks are side effects,” she said.
She noted that women with migraine have more cardiovascular complications, including stroke and myocardial infarction, although these generally affect older patients.
Dr. Lev also noted that pain, especially chronic pain, can cause depression. “We know that people with migraine have more depression and anxiety, so maybe that also affects them during their pregnancy and after,” she said.
She suggested that pregnant women with migraine be considered high risk and be managed via specialized clinics.
Room for improvement
Commenting on the research, Lauren Doyle Strauss, DO, associate professor of neurology, Wake Forest University, Winston-Salem, N.C., who has written about the management of migraine during pregnancy, said studies such as this help raise awareness about pregnancy risks in migraine patients. Dr. Strauss did not attend the live presentation but is aware of the findings.
The increased use of epidurals during delivery among migraine patients in the study makes some sense, said Dr. Strauss. “It kind of shows a comfort level with medicines.”
She expressed concern that such research may be “skewed” because it includes patients with more severe migraine. If less severe cases were included in this research, “maybe there would still be higher risks, but not as high as what we have been finding in some of our studies,” she said.
Dr. Strauss said she feels the medical community should do a better job of identifying and diagnosing migraine. She said she would like to see migraine screening become a routine part of obstetric/gynecologic care. Doctors should counsel migraine patients who wish to become pregnant about potential risks, said Dr. Strauss. “We need to be up front in telling them when to seek care and when to report symptoms and not to wait for it to become super severe,” she said.
She also believes doctors should be “proactive” in helping patients develop a treatment plan before becoming pregnant, because the limited pain control options available for pregnant patients can take time to have an effect.
Also commenting on the study findings, Nina Riggins, MD, PhD, clinical associate professor of neurology at the University of California, San Francisco, said the study raises “important questions” and has “important aims.”
She believes the study reinforces the importance of collaboration between experts in primary care, obstetrics/gynecology, and neurology. However, she was surprised at some of the investigators’ assertions that there are no sex differences in migraine among prepubertal children and that the course of migraine in men is stable throughout the life span.
“There is literature that supports the view that the prevalence in boys is higher in prepuberty, and studies do show that migraine prevalence decreases in older adults – men and women,” she said.
There is still not enough evidence to determine that antiemetics and triptans are safe during pregnancy or that pregnant women with migraine should be taking acetylsalicylic acid, said Dr. Riggins.
The investigators, Dr. Strauss, and Dr. Riggins have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
From EAN 2021
U.S. study finds no association between ‘COVID toes’ and COVID-19
Chilblains, the lesions popularly known as “COVID toes,” were rarely linked to a COVID-19 diagnosis, despite an unprecedented number of new chilblain cases reported in 2020, a large U.S. study has found.
This study follows a report published almost 2 weeks earlier, of 17 adolescents in Italy with chilblain lesions of the toes. That report indicated that the lesions were not related to current or past infections, and that lifestyle changes may have been a contributing factor.
Early last year, clinicians in Europe and the United States began reporting an unusually high number of chilblain cases, but few of the patients described in the cases were positive for SARS-CoV-2 or its antibodies. The possible connection was explored in studies and featured extensively in the lay press. After all, viral infections, including SARS-CoV-2, are known to be associated with skin rashes. Plus, SARS-CoV-2 infections are known to exhibit a number of dermatological manifestations, such as urticarial and morbilliform eruptions, and vesicular eruptions. More than 150 papers have been published on the spectrum of cutaneous reactions to this virus.
In the new study, led by Patrick E. McCleskey, MD, a dermatologist with Kaiser Permanente Oakland (Calif.) Medical Center, a review of chilblain cases from six Bay Area counties in Northern California found only a weak correlation with COVID-19, with about 2% of chilblain cases confirmed as potentially secondary to infection.
“While chilblains do seem to follow COVID-19 infection in some cases, most cases of chilblains in our study were not shown to be related to SARS-CoV-2 infection,” Dr. McCleskey said in an interview.
“We think the increase in cases probably had more to do with changes in behavior as children and adults were at home instead of work and school. The highest incidence in chilblains were seen in children ages 13-19, who were staying home from school. Only 6% in our study said they wear shoes at home, and half of our patients don’t have home heating in northern California,” he said.
Chilblains primarily affect the dorsal feet or hands and are almost exclusively associated with prolonged exposure to damp, cold conditions. Several medical conditions are associated with chilblains, including Raynaud’s disease, systemic lupus erythematosus, antiphospholipid syndrome, rheumatoid arthritis, hyperhidrosis, and lymphomas and leukemias. And, as with COVID-19, chilblains affect more women than men.
Northern California study
The retrospective cohort study evaluated 780 patients (464 female; mean age, 36.8 years) from six Bay Area counties in Northern California who were treated for chilblains between April and December 2020, when stay-at-home orders were in effect in California. Of the 780 patients, 456 were tested for SARS-CoV-2, and 17 (3.7%) tested positive for the virus. In nine patients (2%), the positive test occurred within 6 weeks of the chilblains diagnosis. By September, when testing for SARS-CoV-2 had become more reliable, only 1% of 97 chilblain cases tested positive for the virus.
“The finding that some patients with COVID-19 developed chilblains at the same time, or subsequent to the infection, is suggestive of secondary chilblains due to COVID-19,” Dr. McCleskey said.
The 2020 cases were compared with 539 patients (mean age, 44.7 years) with chilblains who were treated during the same period in 2016, 2017, 2018, and 2019. During those years, the annual incidence of chilblains was 5.2 (95% confidence interval, 4.8-5.6) per 100,000 person-years, compared with 28.6 (95% CI, 26.8-30.4) per 100,000 person-years in 2020, during the pandemic.
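Taken at face value, the two published rates imply roughly a 5.5-fold rise in 2020. The snippet below only restates the reported incidence figures as a ratio; it is not a re-derivation from patient-level data.

```python
# Ratio of the published chilblain incidence rates (per 100,000 person-years).
baseline_2016_2019 = 5.2   # reported annual incidence, 2016-2019
pandemic_2020 = 28.6       # reported incidence in 2020

print(f"Rate ratio: {pandemic_2020 / baseline_2016_2019:.1f}x")  # ~5.5-fold increase
```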
Possible explanations
The authors suggest there are several explanations for the increased reports of chilblains in 2020. First, the lack of shoes: During the pandemic, children between the ages of 13 and 19 years had more cases of chilblains than any other age group, despite the fact that teenagers have a low risk of contracting SARS-CoV-2. Only 6% of teenagers with newly diagnosed chilblains wore shoes at home during the study period in 2020.
Chilblains were almost three times more common in Asian American (42.5; 95% CI, 37.7-47.8) and White individuals (35.7; 95% CI, 32.6-39.1) than in Black (11.6; 95% CI, 7.8-17.3) and Latinx (12.5; 95% CI, 10.1-15.4) individuals. But the authors noted that the Latinx community had the highest rate of COVID-19 cases (62.5; 95% CI, 61.9-63.1), about three times that of Asian Americans (19.0; 95% CI, 18.6-19.3) and White individuals (17.9; 95% CI, 17.7-18.2) and about twice that of Black individuals (29.2; 95% CI, 28.4-29.9).
“Latinx patients had the highest rates of COVID-19 infections in our population, but the lowest rates of chilblains. Groups in Northern California who were more likely to stay home during the pandemic because they could work from home – Asian American and White patients – had much higher rates of chilblains than groups who were more likely to have to work outside the home – Latinx and African American patients,” Dr. McCleskey said.
A report by the Bay Area Council in December 2020 found that Asian Americans and Whites were more likely to work from home during the pandemic (52% and 50%, respectively) compared with Black and Latinx workers (33% and 30%, respectively). While Latinx individuals made up 46% of all COVID-19 cases, they accounted for 9% of chilblain cases in 2020 (but cases may have been underreported), the authors wrote.
And while there may have been more cases of chilblains during the pandemic in 2020, they did not cluster in the cities with higher rates of COVID-19. “If chilblains were occurring in the same communities where COVID-19 cases were occurring, the Spearman coefficient would be closer to 1,” wrote the authors, referring to the measure of rank correlation used in the study. In this case, the Spearman coefficient was 0.18.
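For readers unfamiliar with the statistic, this is the kind of calculation the authors describe: rank the cities by COVID-19 rate and by chilblain rate and ask how well the rankings agree. The data below are invented purely to show the mechanics, and the use of scipy.stats.spearmanr is an assumption about tooling, not the authors’ code; only the interpretation carries over to the study’s reported value of 0.18.

```python
# Minimal sketch of a Spearman rank correlation on made-up city-level rates.
from scipy.stats import spearmanr

# Hypothetical per-city rates (per 100,000): COVID-19 cases vs. chilblains.
covid_rate     = [620, 540, 480, 300, 250, 190]
chilblain_rate = [ 35,  24,  48,  30,  20,  41]

rho, p_value = spearmanr(covid_rate, chilblain_rate)
print(f"Spearman rho = {rho:.2f}")
# With these invented numbers rho comes out near 0 (~0.03): the cities with the
# most COVID-19 are not the cities with the most chilblains, which is the
# pattern the study's coefficient of 0.18 reflects.
```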
Another explanation for the increase in chilblain cases could be that more patients sought care in response to news reports about ‘COVID toes.’
“The exact cause of chilblains is still elusive. Some publications coming out of the pandemic suggest an interferon response is part of the pathophysiology of chilblains, but this was not the focus of our research,” Dr. McCleskey said.
The authors hypothesized that, particularly in younger patients, the immune response to SARS-CoV-2 may have contributed to chilblains even in those without symptomatic or test-confirmed infection. “It is possible that some patients with chilblains were exposed to SARS-CoV-2 but produced such a robust innate immune response that it was later difficult to find any evidence of infection,” they wrote.
They suggested that better testing may help identify past exposure to SARS-CoV-2 and secondary chilblains.
The strengths of this study included its size, its community-based population, a control group dating back to 2016, validation by medical record review, and the ability to control for geographic variation, which allowed investigators to track weather, a known contributor to chilblains. The authors noted several limitations, including the lack of reliable antibody testing early in the year and the lack of IgA antibody testing.
The authors had no disclosures. The study was funded by The Permanente Medical Group Delivery Science and Applied Research initiative.