ILD on the rise: Doctors offer tips for diagnosing deadly disease
“There is definitely a delay from the time of symptom onset to the time that they are even evaluated for ILD,” said Dr. Kulkarni of the department of pulmonary, allergy and critical care medicine at the University of Alabama at Birmingham. “Some patients have had a significant loss of lung function by the time they come to see us. By that point we are limited by what treatment options we can offer.”
Interstitial lung disease is an umbrella term for a group of disorders involving progressive scarring of the lungs – typically irreversible – usually caused by long-term exposure to hazardous materials or by autoimmune processes. It includes idiopathic pulmonary fibrosis (IPF), a fairly rare disease with therapy options that can be effective if it is caught early enough. The term pulmonary fibrosis refers to lung scarring. Another type of ILD is pulmonary sarcoidosis, in which small clumps of immune cells form in the lungs, sometimes after an environmental trigger; if the condition doesn’t resolve, it can lead to lung scarring.
Cases of ILD appear to be on the rise, and COVID-19 has made diagnosing it more complicated. One study found the prevalence of ILD and pulmonary sarcoidosis in high-income countries was about 122 of every 100,000 people in 1990 and rose to about 198 of every 100,000 people in 2017. The data were pulled from the Global Burden of Diseases, Injuries, and Risk Factors Study 2017. Globally, the researchers found a prevalence of 62 per 100,000 in 1990, compared with 82 per 100,000 in 2017.
If all of a patient’s symptoms appeared after COVID-19 and a physician is seeing the patient within 4-6 weeks of COVID symptom onset, the symptoms are likely COVID related. But a full work-up is recommended if a patient has lung crackles, which are an indicator of lung scarring, she said.
“The patterns that are seen on CT scan for COVID pneumonia are very distinct from what we expect to see with idiopathic pulmonary fibrosis,” Dr. Kulkarni said. “Putting all this information together is what is important to differentiate it from COVID pneumonia, as well as other types of ILD.”
A study published earlier this year found similarities between COVID-19 and IPF in gene expression, in their IL-15–heavy cytokine storms, and in the type of damage to alveolar cells. Both might be driven by endoplasmic reticulum stress, the researchers found.
“COVID-19 resembles IPF at a fundamental level,” they wrote.
Jeffrey Horowitz, MD, a pulmonologist and professor of medicine at the Ohio State University, said the need for early diagnosis is in part a function of the therapies available for ILD.
“They don’t make the lung function better,” he said. “So delays in diagnosis mean that there’s the possibility of underlying progression for months, or sometimes years, before the diagnosis is recognized.”
In an area in which diagnosis is delayed and the prognosis is dire – untreated patients typically survive 3-5 years after diagnosis – “there’s a tremendous amount of nihilism out there” among patients, he said.
He said patients with long-term shortness of breath and unexplained cough are often told they have asthma and are prescribed inhalers, but then further assessment isn’t performed when those don’t work.
Diagnosing ILD in primary care
Many primary care physicians feel ill-equipped to discuss IPF. More than a dozen physicians contacted to talk about ILD for this piece either did not respond or said they felt unqualified to answer questions about the disease.
“Not my area of expertise” and “I don’t think I’m the right person for this discussion” were two of the responses provided to this news organization.
“For some reason, in the world of primary care, it seems like there’s an impediment to getting pulmonary function studies,” Dr. Horowitz said. “Anybody who has a persistent ongoing prolonged unexplained shortness of breath and cough should have pulmonary function studies done.”
Listening to the lungs alone might not be enough, he said; early pulmonary fibrosis may produce no clear sign.
“There’s the textbook description of these Velcro-sounding crackles, but sometimes it’s very subtle,” he said. “And unless you’re listening very carefully it can easily be missed by somebody who has a busy practice, or it’s loud.”
William E. Golden, MD, professor of medicine and public health at the University of Arkansas, Little Rock, is the sole primary care physician contacted for this piece who spoke with authority on ILD.
For cases of suspected ILD, internist Dr. Golden, who also serves on the editorial advisory board of Internal Medicine News, suggested ordering a test for diffusing capacity for carbon monoxide (DLCO), which will be low in the case of IPF, along with a fine-cut lung CT scan to assess ongoing fibrotic changes.
It’s “not that difficult, but you need to have an index of suspicion for the diagnosis,” he said.
New initiative to help diagnose ILD
Dr. Kulkarni is a committee member for a new effort to get patients with ILD diagnosed earlier.
The initiative, called Bridging Specialties: Timely Diagnosis for ILD Patients, has already produced an introductory podcast, and a white paper on the effort and its rationale is expected to be released soon, according to Dr. Kulkarni and her fellow committee members.
The American College of Chest Physicians and the Three Lakes Foundation – a foundation dedicated to pulmonary fibrosis awareness and research – are working together on this initiative. They plan to put together a suite of resources, to be gradually rolled out on the college’s website, to raise awareness about the importance of early diagnosis of ILD.
The full toolkit, expected to be rolled out over the next 12 months, will include a series of podcasts and resources on how to get patients diagnosed earlier and steps to take in cases of suspected ILD, Dr. Kulkarni said.
“The goal would be to try to increase awareness about the disease so that people start thinking more about it up front – and not after we’ve ruled out everything else,” she said. The main audience will be primary care providers, but patients and community pulmonologists would likely also benefit from the resources, the committee members said.
The urgency of the initiative stems from the way ILD treatments work. They are antifibrotic, meaning they help prevent scar tissue from forming, but they can’t reverse scar tissue that has already formed. If scarring is severe, the only option might be a lung transplant, and, since the average age at ILD diagnosis is in the 60s, many patients have comorbidities that make them ineligible for transplant. According to the Global Burden of Disease Study mentioned earlier, the death rate per 100,000 people with ILD was 1.93 in 2017.
“The longer we take to diagnose it, the more chance that inflammation will become scar tissue,” Dr. Kulkarni explained.
William Lago, MD, another member of the committee and a family physician, said identifying ILD early is not a straightforward matter.
“When they first present, it’s hard to pick up,” said Dr. Lago, who is also a staff physician at Cleveland Clinic’s Wooster Family Health Center and medical director of the COVID Recover Clinic there. “Many of them, even themselves, will discount the symptoms.”
Dr. Lago said that patients might resist having a work-up even when a primary care physician identifies symptoms as possible ILD. In rural settings, they might have to travel quite a distance for a CT scan or other necessary evaluations, or they might just not think the symptoms are serious enough.
“Most of the time when I’ve picked up some of my pulmonary fibrosis patients, it’s been incidentally while they’re in the office for other things,” he said. He often has to “push the issue” for further work-up, he said.
The overlap of shortness of breath and cough with other, much more common disorders, such as heart disease or chronic obstructive pulmonary disease (COPD), makes ILD diagnosis a challenge, he said.
“For most of us, we’ve got sometimes 10 or 15 minutes with a patient who’s presenting with 5-6 different problems. And the shortness of breath or the occasional cough – that they think is nothing – is probably the least of those,” Dr. Lago said.
Dr. Golden said he suspected a tool like the one being developed by CHEST would be useful for some physicians and not for others. He added that “no one has the time to spend on that kind of thing.”
Instead, he suggested just reinforcing what the core symptoms are and what the core testing is, “to make people think about it.”
Dr. Horowitz seemed more optimistic about the likelihood of the CHEST tool being used to diagnose ILD.
Whether and how he would use the CHEST resource will depend on the final form it takes, Dr. Horowitz said. It’s encouraging that it’s being put together by a credible source, he added.
Dr. Kulkarni reported financial relationships with Boehringer Ingelheim, Aluda Pharmaceuticals and PureTech Lyt-100 Inc. Dr. Lago, Dr. Horowitz, and Dr. Golden reported no relevant disclosures.
Katie Lennon contributed to this report.
Crystal Bone algorithm predicts early fractures, uses ICD codes
The novel Crystal Bone algorithm (Amgen) predicted 2-year risk of osteoporotic fractures in a large dataset with an accuracy consistent with FRAX 10-year risk predictions, researchers report.
The algorithm was built using machine learning and artificial intelligence to predict fracture risk based on International Classification of Diseases (ICD) codes, as described in an article published in the Journal of Medical Internet Research.
The current validation study was presented September 9 as a poster at the annual meeting of the American Society for Bone and Mineral Research.
The scientists validated the algorithm in more than 100,000 patients aged 50 and older (that is, at risk of fracture) who were part of the Reliant Medical Group dataset (a subset of Optum Care).
Importantly, the algorithm predicted increased fracture risk in many patients who did not have a diagnosis of osteoporosis.
The next steps are validation in other datasets to support the generalizability of Crystal Bone across U.S. health care systems, Elinor Mody, MD, Reliant Medical Group, and colleagues report.
“Implementation research, in which patients identified by Crystal Bone undergo a bone health assessment and receive ongoing management, will help inform the clinical utility of this novel algorithm,” they conclude.
At the poster session, Tina Kelley, Optum Life Sciences, explained: “It’s a screening tool that says: ‘These are your patients that maybe you should spend a little extra time with, ask a few extra questions.’ ”
However, further study is needed before it should be used in clinical practice, she emphasized to this news organization.
‘A very useful advance’ but needs further validation
Invited to comment, Peter R. Ebeling, MD, outgoing president of the ASBMR, noted that “many clinicians now use FRAX to calculate absolute fracture risk and select patients who should initiate anti-osteoporosis drugs.”
With FRAX, clinicians input a patient’s age, sex, weight, height, previous fracture, [history of] parent with fractured hip, current smoking status, glucocorticoids, rheumatoid arthritis, secondary osteoporosis, alcohol (3 units/day or more), and bone mineral density (by DXA at the femoral neck) into the tool, to obtain a 10-year probability of fracture.
“Crystal Bone takes a different approach,” Dr. Ebeling, from Monash University, Melbourne, who was not involved with the research but who disclosed receiving funding from Amgen, told this news organization in an email.
The algorithm uses electronic health records (EHRs) to identify patients who are likely to have a fracture within the next 2 years, he explained, based on diagnoses and medications associated with osteoporosis and fractures. These include ICD-10 codes for fractures at various sites and secondary causes of osteoporosis (such as rheumatoid and other inflammatory arthritis, chronic obstructive pulmonary disease, asthma, celiac disease, and inflammatory bowel disease).
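To make that general approach concrete, here is a purely illustrative Python sketch of a fracture-risk classifier driven by ICD-10 codes. The codes, patient histories, outcome labels, and the simple bag-of-codes logistic model are all assumptions made for illustration; the actual Crystal Bone model is a far more sophisticated machine-learning system and is not reproduced here.

```python
# Purely illustrative: a toy 2-year fracture-risk classifier built on
# ICD-10 codes, in the spirit of (but far simpler than) Crystal Bone.
# All codes, histories, and labels below are hypothetical.
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Each patient's record is a space-separated string of ICD-10 codes,
# e.g. M81.0 (age-related osteoporosis), S52.5 (wrist fracture).
histories = [
    "M81.0 S52.5 E11.9",  # osteoporosis, wrist fracture, type 2 diabetes
    "J44.9 I10",          # COPD, hypertension
    "M06.9 M81.0",        # rheumatoid arthritis, osteoporosis
    "I10 E78.5",          # hypertension, hyperlipidemia
]
fractured_within_2y = [1, 0, 1, 0]  # hypothetical outcome labels

# Bag-of-codes features feeding a logistic model.
model = make_pipeline(
    CountVectorizer(token_pattern=r"\S+"),
    LogisticRegression(),
)
model.fit(histories, fractured_within_2y)

# Predicted 2-year fracture probability for a new hypothetical patient.
print(model.predict_proba(["M81.0 I10"])[:, 1])
```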
“This is a very useful advance,” Dr. Ebeling summarized, “in that it would alert the clinician to patients in their practice who have a high fracture risk and need to be investigated for osteoporosis and initiated on treatment. Otherwise, the patients would be missed, as currently often occurs.”
“It would need to be adaptable to other [EMR] systems and to be validated in a large separate population to be ready to enter clinical practice,” he said, “but these data look very promising with a good [positive predictive value (PPV)].”
Similarly, Juliet Compston, MD, said: “It provides a novel, fully automated approach to population-based screening for osteoporosis using EHRs to identify people at high imminent risk of fracture.”
Dr. Compston, emeritus professor of bone medicine, University of Cambridge, England, who was not involved with the research but who also disclosed being a consultant for Amgen, selected the study as one of the top clinical science highlights abstracts at the meeting.
“The algorithm looks at ICD codes for previous history of fracture, medications that have adverse effects on bone – for example glucocorticoids, aromatase inhibitors, and anti-androgens – as well as chronic diseases that increase the risk of fracture,” she explained.
“FRAX is the most commonly used tool to estimate fracture probability in clinical practice and to guide treatment decisions,” she noted. However, “currently it requires human input of data into the FRAX website and is generally only performed on individuals who are selected on the basis of clinical risk factors.”
“The Crystal Bone algorithm offers the potential for fully automated population-based screening in older adults to identify those at high risk of fracture, for whom effective therapies are available to reduce fracture risk,” she summarized.
“It needs further validation,” she noted, “and implementation into clinical practice requires the availability of high-quality EHRs.”
Algorithm validated in 106,328 patients aged 50 and older
Despite guidelines that recommend screening for osteoporosis in women aged 65 and older, men older than 70, and adults aged 50-79 with risk factors, real-world data suggest such screening is low, the researchers note.
The current validation study identified 106,328 patients aged 50 and older who had at least 2 years of consecutive medical history with the Reliant Medical Group from December 2014 to November 2020 as well as at least two EHR codes.
The accuracy of predicting a fracture within 2 years, expressed as area under the receiver operating characteristic curve (AUROC), was 0.77, where 1 is perfect, 0.5 is no better than random selection, 0.7-0.8 is acceptable, and 0.8-0.9 indicates excellent predictive accuracy.
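For readers unfamiliar with the metric, the minimal sketch below shows how an AUROC is computed from model outputs using scikit-learn’s standard roc_auc_score; the labels and risk scores are made up. Intuitively, an AUROC of 0.77 means a randomly chosen patient who fractured is scored above a randomly chosen patient who did not about 77% of the time.

```python
# Minimal sketch of an AUROC calculation; labels and scores are hypothetical.
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 0, 1, 0, 1, 1, 0, 1, 0]  # 1 = fracture within 2 years
y_score = [0.1, 0.3, 0.2, 0.8, 0.4, 0.5, 0.9, 0.2, 0.6, 0.1]  # model risk scores

# AUROC = probability that a random fractured patient is ranked above
# a random non-fractured patient.
print(roc_auc_score(y_true, y_score))
```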
In the entire Optum Reliant population older than 50, the risk of fracture within 2 years was 1.95%.
The algorithm identified four groups with a greater risk: 19,100 patients had a threefold higher risk of fracture within 2 years, 9,246 patients had a fourfold higher risk, 3,533 patients had a sevenfold higher risk, and 1,735 patients had a ninefold higher risk.
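A quick back-of-envelope sketch shows what those fold increases imply in absolute terms if one assumes they apply multiplicatively to the overall 1.95% baseline. That multiplicative reading is an assumption made here for illustration; the poster reports relative risks only.

```python
# Hypothetical translation of the reported fold increases into approximate
# absolute 2-year fracture risks, assuming a multiplicative effect on the
# 1.95% baseline reported for the overall Optum Reliant population.
baseline = 0.0195
groups = {"3-fold": (19_100, 3), "4-fold": (9_246, 4),
          "7-fold": (3_533, 7), "9-fold": (1_735, 9)}
for label, (n, fold) in groups.items():
    print(f"{label}: n={n:>6,}, approx. 2-year risk ~ {baseline * fold:.1%}")
```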
Many of these patients had no prior diagnosis of osteoporosis
For example, of the 19,100 patients with a threefold greater risk of fracture within 2 years, 69% had not been diagnosed with osteoporosis (49% of the group had no history of fracture and 20% had a prior fracture).
The algorithm had a positive predictive value of 6%-18%, a negative predictive value of 98%-99%, a specificity of 81%-98%, and a sensitivity of 18%-59%, for the four groups.
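For context on those four screening metrics, the sketch below computes each from a hypothetical 2×2 confusion matrix; the counts are invented, chosen only so the results land inside the reported ranges, and the definitions are the standard ones.

```python
# Hypothetical 2x2 counts for a screening tool; definitions are standard.
tp, fp = 120, 880   # flagged patients who did / did not fracture
fn, tn = 180, 8820  # unflagged patients who did / did not fracture

ppv = tp / (tp + fp)          # 12%: flagged patients who truly fracture
npv = tn / (tn + fn)          # 98%: unflagged patients who stay fracture-free
sensitivity = tp / (tp + fn)  # 40%: fractures the tool catches
specificity = tn / (tn + fp)  # 91%: non-fractures correctly left unflagged

print(f"PPV={ppv:.0%} NPV={npv:.0%} "
      f"sensitivity={sensitivity:.0%} specificity={specificity:.0%}")
```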
The study was funded by Amgen. Dr. Mody and another author are Reliant Medical Group employees. Ms. Kelley and another author are Optum Life Sciences employees. One author is an employee at Landing AI. Two authors are Amgen employees and own Amgen stock. Dr. Ebeling has disclosed receiving research funding from Amgen, Sanofi, and Alexion, and his institution has received honoraria from Amgen and Kyowa Kirin. Dr. Compston has disclosed receiving speaking and consultancy fees from Amgen and UCB.
A version of this article first appeared on Medscape.com.
FROM ASBMR 2022
Prior psychological distress tied to ‘long-COVID’ conditions
In an analysis of almost 55,000 adult participants in three ongoing studies, having depression, anxiety, worry, perceived stress, or loneliness early in the pandemic, before SARS-CoV-2 infection, was associated with a 50% increased risk for developing long COVID. These types of psychological distress were also associated with a 15% to 51% greater risk for impairment in daily life among individuals with long COVID.
Psychological distress was even more strongly associated with developing long COVID than were physical health risk factors, and the increased risk was not explained by health behaviors such as smoking or physical comorbidities, researchers note.
“Our findings suggest the need to consider psychological health in addition to physical health as risk factors of long COVID-19,” lead author Siwen Wang, MD, postdoctoral fellow, department of nutrition, Harvard T. H. Chan School of Public Health, Boston, said in an interview.
“We need to increase public awareness of the importance of mental health and focus on getting mental health care for people who need it, increasing the supply of mental health clinicians and improving access to care,” she said.
The findings were published online in JAMA Psychiatry.
‘Poorly understood’
Postacute sequelae of SARS-CoV-2 (“long COVID”), which are “signs and symptoms consistent with COVID-19 that extend beyond 4 weeks from onset of infection,” constitute “an emerging health issue,” the investigators write.
Dr. Wang noted that it has been estimated that 8-23 million Americans have developed long COVID. However, “despite the high prevalence and daily life impairment associated with long COVID, it is still poorly understood, and few risk factors have been established,” she said.
Although psychological distress may be implicated in long COVID, only three previous studies investigated psychological factors as potential contributors, the researchers note. Also, no study has investigated the potential role of other common manifestations of distress that have increased during the pandemic, such as loneliness and perceived stress, they add.
To investigate these issues, the researchers turned to three large ongoing longitudinal studies: the Nurses’ Health Study II (NHSII), the Nurses’ Health Study 3 (NHS3), and the Growing Up Today Study (GUTS).
They analyzed data on 54,960 total participants (96.6% women; mean age, 57.5 years). Of the full group, 38% were active health care workers.
Participants completed an online COVID-19 questionnaire from April 2020 to Sept. 1, 2020 (baseline), and monthly surveys thereafter. Beginning in August 2020, surveys were administered quarterly. The end of follow-up was in November 2021.
The COVID questionnaires included questions about positive SARS-CoV-2 test results, COVID symptoms and hospitalization since March 1, 2020, and the presence of long-term COVID symptoms, such as fatigue, respiratory problems, persistent cough, muscle/joint/chest pain, smell/taste problems, confusion/disorientation/brain fog, depression/anxiety/changes in mood, headache, and memory problems.
Participants who reported these post-COVID conditions were asked about the frequency of symptoms and the degree of impairment in daily life.
Inflammation, immune dysregulation implicated?
The Patient Health Questionnaire–4 (PHQ-4) was used to assess for anxiety and depressive symptoms in the past 2 weeks. It consists of a two-item depression measure (PHQ-2) and a two-item Generalized Anxiety Disorder Scale (GAD-2).
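As background, the sketch below scores the PHQ-4’s two subscales under the instrument’s usual conventions: each of the four items is rated 0-3, and a subscale score of 3 or higher is the conventional flag for probable depression or anxiety. The function name and the sample responses are illustrative, not from the study.

```python
# Illustrative PHQ-4 scoring, assuming the usual conventions: four items
# rated 0-3, with two GAD-2 (anxiety) items and two PHQ-2 (depression)
# items; each subscale is flagged at a score >= 3.
def score_phq4(anxiety_items, depression_items):
    gad2 = sum(anxiety_items)     # GAD-2 subscale, range 0-6
    phq2 = sum(depression_items)  # PHQ-2 subscale, range 0-6
    return {
        "gad2": gad2,
        "phq2": phq2,
        "probable_anxiety": gad2 >= 3,
        "probable_depression": phq2 >= 3,
    }

# Hypothetical respondent: mild anxiety items, elevated depression items.
print(score_phq4(anxiety_items=(1, 0), depression_items=(2, 2)))
```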
Non–health care providers completed two additional assessments of psychological distress: the four-item Perceived Stress Scale and the three-item UCLA Loneliness Scale.
The researchers included demographic factors, weight, smoking status, marital status, medical conditions (including diabetes, hypertension, hypercholesterolemia, asthma, and cancer), and socioeconomic factors as covariates.
For each participant, the investigators calculated the number of types of distress experienced at a high level, including probable depression, probable anxiety, worry about COVID-19, being in the top quartile of perceived stress, and loneliness.
During the 19 months of follow-up (1-47 weeks after baseline), 6% of respondents reported a positive result on a SARS-CoV-2 antibody, antigen, or polymerase chain reaction test.
Of these, 43.9% reported long-COVID conditions, with most reporting that symptoms lasted 2 months or longer; 55.8% reported at least occasional daily life impairment.
The most common post-COVID conditions were fatigue (reported by 56%), loss of smell or taste (44.6%), shortness of breath (25.5%), confusion/disorientation/brain fog (24.5%), and memory issues (21.8%).
Among patients who had been infected, preinfection psychological distress of each type was associated with a considerably higher rate of post-COVID conditions after adjustment for sociodemographic factors, health behaviors, and comorbidities.
In addition, participants who had experienced at least two types of distress prior to infection were at nearly 50% increased risk for post–COVID conditions (risk ratio, 1.49; 95% confidence interval, 1.23-1.80).
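To show where a figure like RR 1.49 (95% CI, 1.23-1.80) comes from, here is a minimal sketch of a crude risk ratio and its confidence interval computed from 2×2 counts. The counts are hypothetical, chosen so the point estimate lands near 1.49; the study’s published estimate comes from an adjusted model, so the interval here will not match the published one.

```python
# Minimal sketch: crude risk ratio and 95% CI from hypothetical 2x2 counts.
import math

a, n1 = 400, 600  # post-COVID conditions / total, high-distress group (hypothetical)
b, n2 = 268, 600  # post-COVID conditions / total, comparison group (hypothetical)

rr = (a / n1) / (b / n2)
se_log_rr = math.sqrt(1/a - 1/n1 + 1/b - 1/n2)  # standard error on the log scale
low = math.exp(math.log(rr) - 1.96 * se_log_rr)
high = math.exp(math.log(rr) + 1.96 * se_log_rr)
print(f"RR = {rr:.2f}, 95% CI {low:.2f}-{high:.2f}")
```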
Among those with post-COVID conditions, all types of distress were associated with increased risk for daily life impairment (RR range, 1.15-1.51).
Senior author Andrea Roberts, PhD, senior research scientist at the Harvard T. H. Chan School of Public Health, Boston, noted that the investigators did not examine biological mechanisms potentially underlying the association they found.
However, “based on prior research, it may be that inflammation and immune dysregulation related to psychological distress play a role in the association of distress with long COVID, but we can’t be sure,” Dr. Roberts said.
Contributes to the field
Commenting for this article, Yapeng Su, PhD, a postdoctoral researcher at the Fred Hutchinson Cancer Research Center in Seattle, called the study “great work contributing to the long-COVID research field and revealing important connections” with psychological stress prior to infection.
Dr. Su, who was not involved with the study, was previously at the Institute for Systems Biology, also in Seattle, and has written about long COVID.
He noted that the “biological mechanism of such intriguing linkage is definitely the important next step, which will likely require deep phenotyping of biological specimens from these patients longitudinally.”
Dr. Wang pointed to past research suggesting that some patients with mental illness “sometimes develop autoantibodies that have also been associated with increased risk of long COVID.” In addition, depression “affects the brain in ways that may explain certain cognitive symptoms in long COVID,” she added.
More studies are now needed to understand how psychological distress increases the risk for long COVID, said Dr. Wang.
The research was supported by grants from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institutes of Health, the Dean’s Fund for Scientific Advancement Acceleration Award from the Harvard T. H. Chan School of Public Health, the Massachusetts Consortium on Pathogen Readiness Evergrande COVID-19 Response Fund Award, and the Veterans Affairs Health Services Research and Development Service funds. Dr. Wang and Dr. Roberts have reported no relevant financial relationships. The other investigators’ disclosures are listed in the original article. Dr. Su reports no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA PSYCHIATRY
A ‘big breakfast’ diet affects hunger, not weight loss
That is the conclusion of a new study, published in Cell Metabolism, from the University of Aberdeen. The idea that ‘front-loading’ calories early in the day might help dieting attempts was based on the belief that consuming the bulk of daily calories in the morning optimizes weight loss by burning calories more efficiently and quickly.
“There are a lot of myths surrounding the timing of eating and how it might influence either body weight or health,” said senior author Alexandra Johnstone, PhD, a researcher at the Rowett Institute, University of Aberdeen, who specializes in appetite control. “This has been driven largely by the circadian rhythm field. But we in the nutrition field have wondered how this could be possible. Where would the energy go? We decided to take a closer look at how time of day interacts with metabolism.”
Her team undertook a randomized crossover trial of 30 overweight and obese subjects recruited via social media ads. Participants – 16 men and 14 women – had a mean age of 51 years and a body mass index of 27-42 kg/m2 but were otherwise healthy. The researchers compared two calorie-restricted but isoenergetic weight loss diets: morning-loaded calories, with 45% of intake at breakfast, 35% at lunch, and 20% at dinner, and evening-loaded calories, with the inverse proportions of 20%, 35%, and 45% at breakfast, lunch, and dinner, respectively.
Each diet was followed for 4 weeks, with a controlled baseline diet in which calories were balanced throughout the day provided for 1 week at the outset and during a 1-week washout period between the two intervention diets. Each person’s calorie intake was fixed, referenced to their individual measured resting metabolic rate, to assess the effect on weight loss and energy expenditure of meal timing under isoenergetic intake. Both diets were designed to provide the same nutrient composition of 30% protein, 35% carbohydrate, and 35% fat.
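A minimal sketch of how the per-meal calorie targets follow from this design; the multiplier linking each person's measured resting metabolic rate to the prescribed daily intake is not reported in this article, so the value below is a placeholder:

```python
# Sketch of the isoenergetic meal-split design described above.

def meal_targets(rmr_kcal, multiplier=1.0, split=(0.45, 0.35, 0.20)):
    """Return (breakfast, lunch, dinner) kcal targets for a given split.

    multiplier is a placeholder for however the prescribed intake was
    referenced to each person's measured resting metabolic rate.
    """
    daily = rmr_kcal * multiplier
    return tuple(round(daily * share) for share in split)

rmr = 1500  # hypothetical measured RMR, kcal/day
print(meal_targets(rmr))                            # morning-loaded: 45/35/20
print(meal_targets(rmr, split=(0.20, 0.35, 0.45)))  # evening-loaded: 20/35/45
```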
All food and beverages were provided, “making this the most rigorously controlled study to assess timing of eating in humans to date,” the team said, “with the aim of accounting for all aspects of energy balance.”
No optimum time to eat for weight loss
Both diets produced significant weight reduction by the end of each dietary intervention period, with subjects losing an average of just over 3 kg during each of the 4-week periods. However, there was no difference in weight loss between the morning-loaded and evening-loaded diets.
The relative size of breakfast and dinner – whether a person eats the largest meal early or late in the day – does not have an impact on metabolism, the team said. This challenges previous studies that have suggested that “evening eaters” – now a majority of the U.K. population – have a greater likelihood of gaining weight and greater difficulty in losing it.
“Participants were provided with all their meals for 8 weeks and their energy expenditure and body composition monitored for changes, using gold standard techniques at the Rowett Institute,” Dr. Johnstone said. “The same number of calories was consumed by volunteers at different times of the day, with energy expenditure measures using analysis of urine.
“This study is important because it challenges the previously held belief that eating at different times of the day leads to differential energy expenditure. The research shows that under weight loss conditions there is no optimum time to eat in order to manage weight, and that change in body weight is determined by energy balance.”
Meal timing reduces hunger but does not affect weight loss
However, the research also revealed that when subjects consumed the morning-loaded (big breakfast) diet, they reported feeling significantly less hungry later in the day. “Morning-loaded intake may assist with compliance to weight loss regime, through a greater suppression of appetite,” the authors said, adding that this “could foster easier weight loss in the real world.”
“The participants reported that their appetites were better controlled on the days they ate a bigger breakfast and that they felt satiated throughout the rest of the day,” Dr. Johnstone said.
“We know that appetite control is important to achieve weight loss, and our study suggests that those consuming the most calories in the morning felt less hungry, in contrast to when they consumed more calories in the evening period.
“This could be quite useful in the real-world environment, versus in the research setting that we were working in.”
‘Major finding’ for chrono-nutrition
Coauthor Jonathan Johnston, PhD, professor of chronobiology and integrative physiology at the University of Surrey, said: “This is a major finding for the field of meal timing (‘chrono-nutrition’) research. Many aspects of human biology change across the day and we are starting to understand how this interacts with food intake.
“Our new research shows that, in weight loss conditions, the size of breakfast and dinner regulates our appetite but not the total amount of energy that our bodies use,” Dr. Johnston said. “We plan to build upon this research to improve the health of the general population and specific groups, e.g., shift workers.”
It’s possible that shift workers could have different metabolic responses, due to the disruption of their circadian rhythms, the team said. Dr. Johnstone noted that this type of experiment could also be applied to the study of intermittent fasting (time-restricted eating), to help determine the best time of day for people to consume their calories.
“One thing that’s important to note is that when it comes to timing and dieting, there is not likely going to be one diet that fits all,” she concluded. “Figuring this out is going to be the future of diet studies, but it’s something that’s very difficult to measure.”
Great variability in individual responses to diets
Commenting on the study, Helena Gibson-Moore, RNutr (PH), nutrition scientist and spokesperson for the British Nutrition Foundation, said: “With about two in three adults in the UK either overweight or obese, it’s important that research continues to look into effective strategies for people to lose weight.” She described the study as “interesting,” and a challenge to previous research supporting “front-loading” calories earlier in the day as more effective for weight loss.
“However, whilst in this study there were no differences in weight loss, participants did report significantly lower hunger when eating a higher proportion of calories in the morning,” she said. “Therefore, for people who prefer having a big breakfast this may still be a useful way to help compliance to a weight loss regime through feeling less hungry in the evening, which in turn may lead to a reduced calorie intake later in the day.
“However, research has shown that as individuals we respond to diets in different ways. For example, a study comparing weight loss after a healthy low-fat diet vs. a healthy low-carbohydrate diet showed similar mean weight loss at 12 months, but there was large variability in the personal responses to each diet with some participants actually gaining weight.
“Differences in individual responses to dietary exposures have led to research into a personalized nutrition approach, which requires collection of personal data and then provides individualized advice based on this.” Research has suggested that personalized dietary and physical activity advice was more effective than conventional generalized advice, she said.
“The bottom line for effective weight loss is that it is clear there is ‘no one size fits all’ approach and different weight loss strategies can work for different people but finding effective strategies for long-term sustainability of weight loss continues to be the major challenge. There are many factors that impact successful weight management and for some people it may not just be what we eat that is important, but also how and when we eat.”
This study was funded by the Medical Research Council and the Scottish Government, Rural and Environment Science and Analytical Services Division.
A version of this article first appeared on Medscape.co.uk.
FROM CELL METABOLISM
How does salt intake relate to mortality?
Intake of salt is a biological necessity, inextricably woven into physiologic systems. However, excessive salt intake is associated with high blood pressure. Hypertension is linked to increased cardiovascular morbidity and mortality, and it is estimated that excessive salt intake causes approximately 5 million deaths per year worldwide. Reducing salt intake lowers blood pressure, but processed foods contain “hidden” salt, which makes dietary control of salt difficult. This problem is compounded by growing inequalities in food systems, which present another hurdle to sustaining individual dietary control of salt intake.
Of the 87 risk factors included in the Global Burden of Diseases, Injuries, and Risk Factors Study 2019, high systolic blood pressure was identified as the leading risk factor for disease burden at the global level and for its effect on human health. A range of strategies, including primary care management and reduction in sodium intake, are known to reduce the burden of this critical risk factor. Two questions, however, remain unanswered.
Cardiovascular disease and death
Because dietary sodium intake has been identified as a risk factor for cardiovascular disease and premature death, high sodium intake can be expected to curtail life span. A study tested this hypothesis by analyzing the relationship between sodium intake and life expectancy and survival in 181 countries. Sodium intake correlated positively with life expectancy and inversely with all-cause mortality worldwide and in high-income countries, which argues against dietary sodium intake curtailing life span or being a risk factor for premature death. These results help fuel a scientific debate about sodium intake, life expectancy, and mortality. The debate requires interpreting composite data of positive linear, J-shaped, or inverse linear correlations, which underscores the uncertainty regarding this issue.
In a prospective study of 501,379 participants from the UK Biobank, researchers found that higher frequency of adding salt to foods was significantly associated with a higher risk of premature mortality and lower life expectancy independently of diet, lifestyle, socioeconomic level, and preexisting diseases. They found that the positive association appeared to be attenuated with increasing intake of high-potassium foods (vegetables and fruits).
In addition, the researchers made the following observations:
- For cause-specific premature mortality, they found that higher frequency of adding salt to foods was significantly associated with a higher risk of both cardiovascular disease mortality and cancer mortality (P-trend < .001 for each).
- Always adding salt to foods was associated with lower life expectancy at the age of 50 years, by 1.50 years (95% confidence interval, 0.72-2.30) for women and 2.28 years (95% CI, 1.66-2.90) for men, compared with participants who never or rarely added salt to foods.
The researchers noted that adding salt to foods (usually at the table) is common and is directly related to an individual’s long-term preference for salty foods and habitual salt intake. Indeed, in the Western diet, adding salt at the table accounts for 6%-20% of total salt intake. In addition, commonly used table salt contains 97%-99% sodium chloride, minimizing the potential confounding effects of other dietary factors, including potassium. Therefore, adding salt to foods provides a way to evaluate the association between habitual sodium intake and mortality – something that is relevant, given that it has been estimated that in 2010, a total of 1.65 million deaths from cardiovascular causes were attributable to consumption of more than 2.0 g of sodium per day.
Salt sensitivity
Current evidence supports a recommendation for moderate sodium intake in the general population (3-5 g/day). Persons with hypertension should consume salt at the lower end of that range. Some dietary guidelines recommend consuming less than 2,300 mg dietary sodium per day for persons aged 14 years or older and less for persons aged 2-13 years. Although low sodium intake (< 2.0 g/day) has been achieved in short-term clinical trials, sustained low sodium intake has not been achieved in any of the longer-term clinical trials (duration > 6 months).
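Because guidance is phrased variously in grams of salt or milligrams of sodium, a quick conversion helps: table salt is roughly 39% sodium by mass, so the two scales differ by a factor of about 2.5. A minimal sketch:

```python
# NaCl is ~39.3% sodium by mass (Na 22.99 g/mol of NaCl 58.44 g/mol).
SODIUM_FRACTION_OF_NACL = 22.99 / 58.44

def sodium_to_salt_g(sodium_mg):
    """Convert milligrams of sodium to grams of table salt."""
    return sodium_mg / 1000 / SODIUM_FRACTION_OF_NACL

def salt_to_sodium_mg(salt_g):
    """Convert grams of table salt to milligrams of sodium."""
    return salt_g * 1000 * SODIUM_FRACTION_OF_NACL

print(round(sodium_to_salt_g(2300), 1))  # 2,300 mg sodium ~= 5.8 g salt
print(round(salt_to_sodium_mg(4)))       # 4 g salt ~= 1,574 mg sodium
```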
The controversy continues as to the relationship between low sodium intake and blood pressure or cardiovascular diseases. Most studies show that in individuals with hypertension and those without, blood pressure is reduced by consuming less sodium. However, it is not necessarily lowered further by reducing intake below the moderate range (3-5 g/day). With a sodium-rich diet, most normotensive individuals experienced a minimal change in mean arterial pressure; for many individuals with hypertension, the values increased by about 4 mm Hg. In addition, among individuals with hypertension who are “salt sensitive,” arterial pressure can increase by > 10 mm Hg in response to high sodium intake.
The effect of potassium
Replacing some of the sodium chloride in regular salt with potassium chloride may mitigate some of salt’s harmful cardiovascular effects. Indeed, salt substitutes that have reduced sodium levels and increased potassium levels have been shown to lower blood pressure.
In one trial, researchers enrolled over 20,000 persons from 600 villages in rural China and compared the use of regular salt (100% sodium chloride) with the use of a salt substitute (75% sodium chloride and 25% potassium chloride by mass).
The participants were at high risk for stroke, cardiovascular events, and death. The mean duration of follow-up was 4.74 years. The results were surprising. The rate of stroke was lower with the salt substitute than with regular salt (29.14 events vs. 33.65 events per 1,000 person-years; rate ratio, 0.86; 95% CI, 0.77-0.96; P = .006), as were the rates of major cardiovascular events and death from any cause. The rate of serious adverse events attributed to hyperkalemia was not significantly higher with the salt substitute than with regular salt.
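The unadjusted rate ratio can be checked directly from the reported event rates; the published figure of 0.86 (95% CI, 0.77-0.96) reflects the trial's full analysis, so this one-liner is illustrative only:

```python
# Ratio of stroke rates, salt substitute vs. regular salt,
# in events per 1,000 person-years as reported above.
print(round(29.14 / 33.65, 2))  # ~0.87, close to the reported rate ratio of 0.86
```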
Although there is an ongoing debate about the extent of salt’s effects on the cardiovascular system, there is no doubt that in most places in the world, people are consuming more salt than the body needs.
A lot depends upon the kind of diet consumed by a particular population. Processed food is rarely used in rural areas, such as those involved in the above-mentioned trial, with dietary sodium chloride being added while preparing food at home. This is a determining factor with regard to cardiovascular outcomes, but it cannot be generalized to other social-environmental settings.
In much of the world, commercial food preservation introduces a lot of sodium chloride into the diet, so total salt intake cannot be addressed through salt substitutes alone. Indeed, by comparing the sodium content of cereal-based products currently sold on the Italian market with the respective benchmarks proposed by the World Health Organization, researchers found that for most items, the sodium content is much higher than the benchmarks, especially for flatbreads, leavened breads, and crackers/savory biscuits. This shows that there is work to be done to achieve the World Health Organization/United Nations objective of a 30% global reduction in sodium intake by 2025.
This article was translated from Univadis Italy. A version of this article first appeared on Medscape.com.
Intake of salt is a biological necessity, inextricably woven into physiologic systems. However, excessive salt intake is associated with high blood pressure. Hypertension is linked to increased cardiovascular morbidity and mortality, and it is estimated that excessive salt intake causes approximately 5 million deaths per year worldwide. Reducing salt intake lowers blood pressure, but processed foods contain “hidden” salt, which makes dietary control of salt difficult. This problem is compounded by growing inequalities in food systems, which present another hurdle to sustaining individual dietary control of salt intake.
Of the 87 risk factors included in the Global Burden of Diseases, Injuries, and Risk Factors Study 2019, high systolic blood pressure was identified as the leading risk factor for disease burden at the global level and for its effect on human health. A range of strategies, including primary care management and reduction in sodium intake, are known to reduce the burden of this critical risk factor. Two questions remain unanswered:
Cardiovascular disease and death
Because dietary sodium intake has been identified as a risk factor for cardiovascular disease and premature death, high sodium intake can be expected to curtail life span. A study tested this hypothesis by analyzing the relationship between sodium intake and life expectancy and survival in 181 countries. Sodium intake correlated positively with life expectancy and inversely with all-cause mortality worldwide and in high-income countries, which argues against dietary sodium intake curtailing life span or a being risk factor for premature death. These results help fuel a scientific debate about sodium intake, life expectancy, and mortality. The debate requires interpreting composite data of positive linear, J-shaped, or inverse linear correlations, which underscores the uncertainty regarding this issue.
In a prospective study of 501,379 participants from the UK Biobank, researchers found that higher frequency of adding salt to foods was significantly associated with a higher risk of premature mortality and lower life expectancy independently of diet, lifestyle, socioeconomic level, and preexisting diseases. They found that the positive association appeared to be attenuated with increasing intake of high-potassium foods (vegetables and fruits).
In addition, the researchers made the following observations:
- For cause-specific premature mortality, they found that higher frequency of adding salt to foods was significantly associated with a higher risk of cardiovascular disease mortality and cancer mortality (P-trend < .001 and P-trend < .001, respectively).
- Always adding salt to foods was associated with the lower life expectancy at the age of 50 years by 1.50 (95% confidence interval, 0.72-2.30) and 2.28 (95% CI, 1.66-2.90) years for women and men, respectively, compared with participants who never or rarely added salt to foods.
The researchers noted that adding salt to foods (usually at the table) is common and is directly related to an individual’s long-term preference for salty foods and habitual salt intake. Indeed, in the Western diet, adding salt at the table accounts for 6%-20% of total salt intake. In addition, commonly used table salt contains 97%-99% sodium chloride, minimizing the potential confounding effects of other dietary factors, including potassium. Therefore, adding salt to foods provides a way to evaluate the association between habitual sodium intake and mortality – something that is relevant, given that it has been estimated that in 2010, a total of 1.65 million deaths from cardiovascular causes were attributable to consumption of more than 2.0 g of sodium per day.
Salt sensitivity
Current evidence supports a recommendation for moderate sodium intake in the general population (3-5 g/day). Persons with hypertension should consume salt at the lower end of that range. Some dietary guidelines recommend consuming less than 2,300 mg dietary sodium per day for persons aged 14 years or older and less for persons aged 2-13 years. Although low sodium intake (< 2.0 g/day) has been achieved in short-term clinical trials, sustained low sodium intake has not been achieved in any of the longer-term clinical trials (duration > 6 months).
The controversy continues as to the relationship between low sodium intake and blood pressure or cardiovascular diseases. Most studies show that both in individuals with hypertension and those without, blood pressure is reduced by consuming less sodium. However, it is not necessarily lowered by reducing sodium intake (< 3-5 g/day). With a sodium-rich diet, most normotensive individuals experienced a minimal change in mean arterial pressure; for many individuals with hypertension, the values increased by about 4 mm Hg. In addition, among individuals with hypertension who are “salt sensitive,” arterial pressure can increase by > 10 mm Hg in response to high sodium intake.
The effect of potassium
Replacing some of the sodium chloride in regular salt with potassium chloride may mitigate some of salt’s harmful cardiovascular effects. Indeed, salt substitutes that have reduced sodium levels and increased potassium levels have been shown to lower blood pressure.
In one trial, researchers enrolled over 20,000 persons from 600 villages in rural China and compared the use of regular salt (100% sodium chloride) with the use of a salt substitute (75% sodium chloride and 25% potassium chloride by mass).
The participants were at high risk for stroke, cardiovascular events, and death. The mean duration of follow-up was 4.74 years. The results were surprising. The rate of stroke was lower with the salt substitute than with regular salt (29.14 events vs. 33.65 events per 1,000 person-years; rate ratio, 0.86; 95% CI, 0.77-0.96; P = .006), as were the rates of major cardiovascular events and death from any cause. The rate of serious adverse events attributed to hyperkalemia was not significantly higher with the salt substitute than with regular salt.
Although there is an ongoing debate about the extent of salt’s effects on the cardiovascular system, there is no doubt that in most places in the world, people are consuming more salt than the body needs.
A lot depends upon the kind of diet consumed by a particular population. Processed food is rarely used in rural areas, such as those involved in the above-mentioned trial, with dietary sodium chloride being added while preparing food at home. This is a determining factor with regard to cardiovascular outcomes, but it cannot be generalized to other social-environmental settings.
In much of the world, commercial food preservation introduces a lot of sodium chloride into the diet, and most salt intake could not be fully attributed to the use of salt substitutes. Indeed, by comparing the sodium content of cereal-based products currently sold on the Italian market with the respective benchmarks proposed by the World Health Organization, researchers found that for most items, the sodium content is much higher than the benchmarks, especially with flatbreads, leavened breads, and crackers/savory biscuits. This shows that there is work to be done to achieve the World Health Organization/United Nations objective of a 30% global reduction in sodium intake by 2025.
This article was translated from Univadis Italy. A version of this article first appeared on Medscape.com.
Intake of salt is a biological necessity, inextricably woven into physiologic systems. However, excessive salt intake is associated with high blood pressure. Hypertension is linked to increased cardiovascular morbidity and mortality, and it is estimated that excessive salt intake causes approximately 5 million deaths per year worldwide. Reducing salt intake lowers blood pressure, but processed foods contain “hidden” salt, which makes dietary control of salt difficult. This problem is compounded by growing inequalities in food systems, which present another hurdle to sustaining individual dietary control of salt intake.
Of the 87 risk factors included in the Global Burden of Diseases, Injuries, and Risk Factors Study 2019, high systolic blood pressure was identified as the leading risk factor for disease burden at the global level and for its effect on human health. A range of strategies, including primary care management and reduction in sodium intake, are known to reduce the burden of this critical risk factor. Two questions remain unanswered:
Cardiovascular disease and death
Because dietary sodium intake has been identified as a risk factor for cardiovascular disease and premature death, high sodium intake can be expected to curtail life span. A study tested this hypothesis by analyzing the relationship between sodium intake and life expectancy and survival in 181 countries. Sodium intake correlated positively with life expectancy and inversely with all-cause mortality worldwide and in high-income countries, which argues against dietary sodium intake curtailing life span or a being risk factor for premature death. These results help fuel a scientific debate about sodium intake, life expectancy, and mortality. The debate requires interpreting composite data of positive linear, J-shaped, or inverse linear correlations, which underscores the uncertainty regarding this issue.
In a prospective study of 501,379 participants from the UK Biobank, researchers found that higher frequency of adding salt to foods was significantly associated with a higher risk of premature mortality and lower life expectancy independently of diet, lifestyle, socioeconomic level, and preexisting diseases. They found that the positive association appeared to be attenuated with increasing intake of high-potassium foods (vegetables and fruits).
In addition, the researchers made the following observations:
- For cause-specific premature mortality, they found that higher frequency of adding salt to foods was significantly associated with a higher risk of both cardiovascular disease mortality and cancer mortality (P-trend < .001 for each).
- Always adding salt to foods was associated with a lower life expectancy at the age of 50 years, by 1.50 years (95% confidence interval [CI], 0.72-2.30) for women and 2.28 years (95% CI, 1.66-2.90) for men, compared with never or rarely adding salt to foods.
The researchers noted that adding salt to foods (usually at the table) is common and is directly related to an individual’s long-term preference for salty foods and habitual salt intake. Indeed, in the Western diet, adding salt at the table accounts for 6%-20% of total salt intake. In addition, commonly used table salt contains 97%-99% sodium chloride, minimizing the potential confounding effects of other dietary factors, including potassium. Therefore, adding salt to foods provides a way to evaluate the association between habitual sodium intake and mortality – something that is relevant, given that it has been estimated that in 2010, a total of 1.65 million deaths from cardiovascular causes were attributable to consumption of more than 2.0 g of sodium per day.
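Because studies quote grams of sodium while table habits are described in grams of salt, a quick conversion helps put these numbers side by side: sodium chloride is roughly 39% sodium by mass, so the 2.0-g/day sodium threshold cited above corresponds to about 5 g of salt. A minimal sketch of the arithmetic:

```python
# Convert between grams of sodium and grams of salt (NaCl).
# Molar masses: Na ≈ 22.99 g/mol, Cl ≈ 35.45 g/mol → NaCl is ≈ 39.3% sodium.
NA_FRACTION = 22.99 / (22.99 + 35.45)  # ≈ 0.393

def sodium_to_salt_g(sodium_g: float) -> float:
    """Grams of NaCl containing the given grams of sodium."""
    return sodium_g / NA_FRACTION

def salt_to_sodium_g(salt_g: float) -> float:
    """Grams of sodium in the given grams of NaCl."""
    return salt_g * NA_FRACTION

print(f"2.0 g sodium ≈ {sodium_to_salt_g(2.0):.1f} g salt")    # ≈ 5.1 g
print(f"5.0 g salt   ≈ {salt_to_sodium_g(5.0):.1f} g sodium")  # ≈ 2.0 g
```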
Salt sensitivity
Current evidence supports a recommendation for moderate sodium intake in the general population (3-5 g/day). Persons with hypertension should consume salt at the lower end of that range. Some dietary guidelines recommend consuming less than 2,300 mg dietary sodium per day for persons aged 14 years or older and less for persons aged 2-13 years. Although low sodium intake (< 2.0 g/day) has been achieved in short-term clinical trials, sustained low sodium intake has not been achieved in any of the longer-term clinical trials (duration > 6 months).
The controversy continues as to the relationship between low sodium intake and blood pressure or cardiovascular diseases. Most studies show that, both in individuals with hypertension and in those without, blood pressure is reduced by consuming less sodium. However, reducing intake below the moderate range (3-5 g/day) does not necessarily lower it further. With a sodium-rich diet, most normotensive individuals experienced a minimal change in mean arterial pressure; for many individuals with hypertension, the values increased by about 4 mm Hg. In addition, among individuals with hypertension who are “salt sensitive,” arterial pressure can increase by > 10 mm Hg in response to high sodium intake.
The effect of potassium
Replacing some of the sodium chloride in regular salt with potassium chloride may mitigate some of salt’s harmful cardiovascular effects. Indeed, salt substitutes that have reduced sodium levels and increased potassium levels have been shown to lower blood pressure.
In one trial, researchers enrolled over 20,000 persons from 600 villages in rural China and compared the use of regular salt (100% sodium chloride) with the use of a salt substitute (75% sodium chloride and 25% potassium chloride by mass).
The participants were at high risk for stroke, cardiovascular events, and death. The mean duration of follow-up was 4.74 years. The results were surprising. The rate of stroke was lower with the salt substitute than with regular salt (29.14 events vs. 33.65 events per 1,000 person-years; rate ratio, 0.86; 95% CI, 0.77-0.96; P = .006), as were the rates of major cardiovascular events and death from any cause. The rate of serious adverse events attributed to hyperkalemia was not significantly higher with the salt substitute than with regular salt.
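The headline figure can be sanity-checked from the published rates. The short sketch below reproduces the rate ratio; the confidence-interval helper is a standard Wald approximation on the log scale and assumes raw event counts and person-time, which are not given in this excerpt.

```python
import math

# Published stroke rates per 1,000 person-years in the trial.
rate_substitute = 29.14  # salt-substitute arm
rate_regular = 33.65     # regular-salt arm

print(f"rate ratio ≈ {rate_substitute / rate_regular:.2f}")
# ≈ 0.87 from the rounded published rates; the trial reports 0.86,
# computed from unrounded data.

def rate_ratio_ci(events_1, persontime_1, events_2, persontime_2, z=1.96):
    """Wald CI for an incidence rate ratio (requires raw counts, not shown above)."""
    rr = (events_1 / persontime_1) / (events_2 / persontime_2)
    se_log_rr = math.sqrt(1 / events_1 + 1 / events_2)
    return rr, rr * math.exp(-z * se_log_rr), rr * math.exp(z * se_log_rr)
```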
Although there is an ongoing debate about the extent of salt’s effects on the cardiovascular system, there is no doubt that in most places in the world, people are consuming more salt than the body needs.
A lot depends upon the kind of diet consumed by a particular population. Processed food is rarely used in rural areas, such as those involved in the above-mentioned trial, with dietary sodium chloride being added while preparing food at home. This is a determining factor with regard to cardiovascular outcomes, but it cannot be generalized to other social-environmental settings.
In much of the world, commercial food preservation introduces a lot of sodium chloride into the diet, so most salt intake cannot be addressed simply by switching to salt substitutes at home. Indeed, by comparing the sodium content of cereal-based products currently sold on the Italian market with the respective benchmarks proposed by the World Health Organization, researchers found that for most items, the sodium content is much higher than the benchmarks, especially for flatbreads, leavened breads, and crackers/savory biscuits. This shows that there is work to be done to achieve the World Health Organization/United Nations objective of a 30% global reduction in sodium intake by 2025.
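The benchmark comparison described above amounts to joining product data with the WHO category benchmarks and flagging exceedances. A minimal sketch, with hypothetical file and column names:

```python
# Illustrative sketch of checking products against WHO sodium benchmarks.
# File and column names are hypothetical, not from the published analysis.
import pandas as pd

products = pd.read_csv("cereal_products.csv")          # product, category, sodium_mg_per_100g
benchmarks = pd.read_csv("who_sodium_benchmarks.csv")  # category, benchmark_mg_per_100g

merged = products.merge(benchmarks, on="category")
merged["exceeds"] = merged["sodium_mg_per_100g"] > merged["benchmark_mg_per_100g"]

# Share of items over the benchmark, by category (e.g., flatbreads, crackers).
print(merged.groupby("category")["exceeds"].mean().sort_values(ascending=False))
```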
This article was translated from Univadis Italy. A version of this article first appeared on Medscape.com.
The potential problem(s) with a once-a-year COVID vaccine
Comments from the White House this week suggesting a once-a-year COVID-19 shot for most Americans, “just like your annual flu shot,” were met with backlash from many who say COVID and influenza come from different viruses and need different schedules.
Reactions, ranging from charges of “capitulation” to complaints that there are too few data, hit the airwaves and social media.
Some, however, agree with the White House vision and say that asking people to get one shot in the fall instead of periodic pushes for boosters will raise public confidence and buy-in and reduce consumer confusion.
Health leaders, including Bob Wachter, MD, chair of the department of medicine at the University of California, San Francisco, say they like the framing of the concept – that people who are not high-risk should plan each year for a COVID shot and a flu shot.
“... & we need strategy to bump uptake,” Dr. Wachter tweeted this week.
But the numbers of Americans seeking boosters remain low. Only one-third of all eligible people 50 years and older have gotten a second COVID booster, according to the Centers for Disease Control and Prevention. About half of those who got the original two shots got a first booster.
Meanwhile, the United States is still averaging about 70,000 new COVID cases and more than 300 deaths every day.
The suggested change in approach comes as Pfizer/BioNTech and Moderna roll out their new boosters targeting the Omicron subvariants BA.4 and BA.5, after the U.S. Food and Drug Administration granted emergency use authorization and the CDC recommended their use.
“As the virus continues to change, we will now be able to update our vaccines annually to target the dominant variant,” President Joe Biden said in a statement promoting the yearly approach.
Some say annual shot premature
Other experts say it’s too soon to tell whether an annual approach will work.
“We have no data to support that current vaccines, including the new BA.5 booster, will provide durable protection beyond 4-6 months. It would be good to aspire to this objective, and much longer duration of protection, but that will likely require next generation and nasal vaccines,” said Eric Topol, MD, Medscape’s editor-in-chief and founder and director of the Scripps Research Translational Institute.
A report in Nature Reviews Immunology states, “Mucosal vaccines offer the potential to trigger robust protective immune responses at the predominant sites of pathogen infection” and potentially “can prevent an infection from becoming established in the first place, rather than only curtailing infection and protecting against the development of disease symptoms.”
Dr. Topol tweeted after the White House statements, “[An annual vaccine] has the ring of Covid capitulation.”
William Schaffner, MD, an infectious disease expert at Vanderbilt University, Nashville, Tenn., told this news organization that he cautions against interpreting the White House comments as official policy.
“This is the difficulty of having public health announcements come out of Washington,” he said. “They ought to come out of the CDC.”
He says there is a reasonable analogy between COVID and influenza, but warns, “don’t push the analogy.”
They are both serious respiratory viruses that can cause much illness and death in essentially the same populations, he notes: older, frail people and those who have underlying illnesses or are immunocompromised.
Both viruses also mutate. But there the paths diverge.
“We’ve gotten into a pattern of annually updating the influenza vaccine because it is such a singularly seasonal virus,” Dr. Schaffner said. “Basically it disappears during the summer. We’ve had plenty of COVID during the summers.”
For COVID, he said, “We will need a periodic booster. Could this be annually? That would certainly make it easier.” But it’s too soon to tell, he said.
Dr. Schaffner noted that several manufacturers are working on a combined flu/COVID vaccine.
Just a ‘first step’ toward annual shot
The currently updated COVID vaccine may be the first step toward an annual vaccine, but it’s only the first step, Dr. Schaffner said. “We haven’t committed to further steps yet because we’re watching this virus.”
Syra Madad, DHSc, MSc, an infectious disease epidemiologist at Harvard University’s Belfer Center for Science and International Affairs, Cambridge, Mass., and the New York City hospital system, told this news organization that arguments on both sides make sense.
Having a single message once a year can help eliminate the considerable confusion involving people on individual timelines with different levels of immunity and separate campaigns for COVID and flu shots coming at different times of the year.
“Communication around vaccines is very muddled, and that shows in our overall vaccination rates, particularly booster rates,” she said. “The overall strategy is hopeful and makes sense if we’re going to progress that way based on data.”
However, she said that the data are just not there yet to show it’s time for an annual vaccine. First, scientists will need to see how long protection lasts with the Omicron-specific vaccine and how well and how long it protects against severe disease and death as well as infection.
COVID is less predictable than influenza, and the influenza vaccine has been around for decades, Dr. Madad noted. Influenza seasons are more easily anticipated, following a “ladder-like pattern,” she said. “COVID-19 is not like that.”
What is hopeful, she said, “is that we’ve been in the Omicron dynasty since November of 2021. I’m hopeful that we’ll stick with that particular variant.”
Dr. Topol, Dr. Schaffner, and Dr. Madad declared no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Overall survival dips with vitamin D deficiency in melanoma
Overall survival is lower among patients with melanoma who are deficient in vitamin D, according to research presented at the annual congress of the European Academy of Dermatology and Venereology (EADV).
Whereas the 5-year overall survival was 90% when vitamin D serum levels were above a 10 ng/mL threshold, it was 84% when levels fell below it. Notably, the gap in overall survival between those above and below the threshold appeared to widen as time went on.
The research adds to existing evidence that “vitamin D levels can play an important and independent role in patients’ survival outcomes,” study investigator Inés Gracia-Darder, MD, told this news organization. “The important application in clinical practice would be to know if vitamin D supplementation influences the survival of melanoma patients,” said Dr. Gracia-Darder, a clinical specialist in dermatology at the Hospital Universitari Son Espases, Mallorca, Spain.
Known association, but not much data
“It is not a new finding,” but there are limited data, especially in melanoma, said Julie De Smedt, MD, of KU Leuven, Belgium, who was asked to comment on the results. Other groups have shown, certainly for cancer in general, that vitamin D can have an effect on overall survival.
“Low levels of vitamin D are associated with the pathological parameters of the melanoma, such as the thickness of the tumor,” Dr. De Smedt said in an interview, indicating that it’s not just overall survival that might be affected.
“So we assume that also has an effect on melanoma-specific survival,” she added.
That assumption, however, is not supported by the data Dr. Gracia-Darder presented, as there was no difference in melanoma-specific survival between the two groups of patients studied.
Retrospective cohort analysis
Vitamin D levels had been studied in 264 patients who were included in the retrospective cohort analysis. All had invasive melanomas, and all had been seen at the Hospital Clinic of Barcelona between January 1998 and June 2021. Their mean age was 57 years, and the median follow-up was 6.7 years.
For inclusion, all patients had to have had their vitamin D levels measured after being diagnosed with melanoma; those with a 25-hydroxyvitamin D3 serum level of less than 10 ng/mL were deemed to be vitamin D deficient, whereas those with levels of 10 ng/mL and above were deemed normal or insufficient.
A measurement less than 10 ng/mL is considered vitamin D deficiency, Dr. De Smedt said. “But there is a difference between countries, and there’s also a difference between societies,” noting the cut-off used in the lab where she works is 20 ng/mL. This makes it difficult to compare studies, she said.
Independent association with overall survival
Seasonal variation in vitamin D levels was considered as a possible confounding factor, but Dr. Gracia-Darder noted that there was a similar distribution of measurements taken from October to March and from April to September.
Univariate and multivariate analyses established vitamin D deficiency as independently associated with worse overall survival, with hazard ratios of 2.34 and 2.45, respectively.
Other predictive factors were a higher Breslow index, older age, and gender.
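As an illustration of what such a univariate/multivariate survival analysis typically looks like, here is a Cox proportional hazards sketch using the Python lifelines library; the cohort file and column names are hypothetical, and this is not the investigators' code.

```python
# Illustrative Cox proportional hazards sketch using the lifelines library.
# The cohort file and column names are hypothetical, not the study's data.
import pandas as pd
from lifelines import CoxPHFitter

df = pd.read_csv("melanoma_cohort.csv")
df["vitd_deficient"] = (df["vitd_25oh_ng_ml"] < 10).astype(int)  # study cut-off

# Covariates mirror the predictors mentioned above; sex_male is assumed 0/1.
cols = ["followup_years", "died", "vitd_deficient", "age", "breslow_mm", "sex_male"]
cph = CoxPHFitter()
cph.fit(df[cols], duration_col="followup_years", event_col="died")

# exp(coef) for vitd_deficient is the adjusted hazard ratio
# (the study reported 2.45 in the multivariate model).
cph.print_summary()
```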
Time to recommend vitamin D supplementation?
So should patients with melanoma have their vitamin D levels routinely checked? And what about advising them to take vitamin D supplements?
“In our practice, we analyze the vitamin D levels of our patients,” Dr. Gracia-Darder said. Patients are told to limit their exposure to the sun because of their skin cancer, so they are very likely to become vitamin D deficient.
While dietary changes or supplements might be suggested, there’s no real evidence to support upping vitamin D levels to date, so “future prospective studies are needed,” Dr. Gracia-Darder added.
Such studies have already started, including one in Italy, one in Australia, and another study that Dr. De Smedt has been involved with for the past few years.
Called the ViDMe study, it’s a multicenter, randomized, double-blind trial in which patients are being given a high-dose oral vitamin D supplement or placebo once a month for at least 1 year. About 430 patients with a first cutaneous malignant melanoma have been included in the trial, which started in December 2012.
It is hoped that the results will show that the supplementation will have had a protective effect on the risk of relapse and that there will be a correlation between vitamin D levels in the blood and vitamin D receptor immunoreactivity in the tumor.
“The study is still blinded,” Dr. De Smedt said. “We will unblind in the coming months and then at the end of the year, maybe next year, we will have the results.”
The study reported by Dr. Gracia-Darder did not receive any specific funding. Dr. Gracia-Darder disclosed that the melanoma unit where the study was performed receives many grants and funds to carry out research. She reported no other relevant financial relationships. Dr. De Smedt had no relevant financial relationships. The ViDMe study is sponsored by the Universitaire Ziekenhuizen Leuven.
A version of this article first appeared on Medscape.com.
Test Lp(a) levels to inform ASCVD management: NLA statement
Lipoprotein(a) (Lp[a]) levels should be measured in clinical practice to refine risk prediction for atherosclerotic cardiovascular disease (ASCVD) and inform treatment decisions, even if they cannot yet be lowered directly, recommends the National Lipid Association (NLA) in a scientific statement.
The statement was published in the Journal of Clinical Lipidology.
Don P. Wilson, MD, department of pediatric endocrinology and diabetes, Cook Children’s Medical Center, Fort Worth, Tex., told this news organization that lipoprotein(a) is a “very timely subject.”
“The question in the scientific community is: What role does that particular biomarker play in terms of causing serious heart disease, stroke, and calcification of the aortic valve?”
“It’s pretty clear that, in and of itself, it actually can contribute to and/or cause any of those conditions,” he added. “The thing that’s then sort of problematic is that we don’t have a specific treatment to lower” Lp(a).
However, Dr. Wilson said that the statement underlines it is “still worth knowing” an individual’s Lp(a) concentrations because the risk with increased levels is “even higher for those people who have other conditions, such as metabolic disease or diabetes or high cholesterol.”
There are nevertheless several drugs in phase 2 and 3 clinical trials that appear to have the potential to significantly lower Lp(a) levels.
“I’m very excited,” said Dr. Wilson, noting that, so far, the drugs seem to be “quite safe,” and the currently available data suggest that they can “reduce Lp(a) levels by about 90%, which is huge.”
“That’s better than any drug we’ve got on the market.”
He cautioned, however, that it is going to take time after the drugs are approved to see the real benefits and risks once they start being used in very large populations, given that raised Lp(a) concentrations are present in about 20% of the world population.
The publication of the NLA statement coincides with a similar one from the European Atherosclerosis Society presented at the European Society of Cardiology Congress 2022 on Aug. 29, and published simultaneously in the European Heart Journal.
Coauthor of the EAS statement, Alberico L. Catapano, MD, PhD, professor of pharmacology at the University of Milan, and past president of the EAS, said that there are many areas in which the two statements are “in complete agreement.”
“However, the spirit of the documents is different,” he continued. Chief among the differences is that the EAS statement focuses on the “global risk” of ASCVD and provides a risk calculator to help weigh the risk increase from Lp(a) against that from other factors.
Another is that increased Lp(a) levels are recognized as being on a continuum in terms of their risk, such that there is no level at which raised concentrations can be deemed safe.
Dr. Wilson agreed with Dr. Catapano’s assessment, saying that the EAS statement takes current scientific observations “a step further,” in part by emphasizing that Lp(a) is “only one piece of the puzzle” for determining an individual’s cardiovascular risk.
This will have huge implications for the conversations clinicians have with patients over shared decision-making, Dr. Wilson added.
Nevertheless, Dr. Catapano underlined to this news organization that “both documents are very important” in terms of the need to “raise awareness about a causal risk factor” for cardiovascular disease as well as that modifying Lp(a) concentrations “will probably reduce the risk.”
The statement from the NLA builds on the association’s prior Recommendations for the Patient-Centered Management of Dyslipidemia, published in two parts in 2014 and 2015, and comes to many of the same conclusions as the EAS statement.
It explains that apolipoprotein(a), a component of Lp(a) attached to apolipoprotein B, has “unique” properties that promote the “initiation and progression of atherosclerosis and calcific valvular aortic stenosis, through endothelial dysfunction and proinflammatory responses, and pro-osteogenic effects promoting calcification.”
This, in turn, has the potential to cause myocardial infarction and ischemic stroke, the authors note.
This has been confirmed in meta-analyses of prospective, population-based studies showing a high risk for MI, coronary heart disease, and ischemic stroke with high Lp(a) levels, the statement adds.
Moreover, large genetic studies have confirmed that Lp(a) is a causal factor, independent of low-density lipoprotein cholesterol levels, for MI, ischemic stroke, valvular aortic stenosis, coronary artery stenosis, carotid stenosis, femoral artery stenosis, heart failure, cardiovascular mortality, and all-cause mortality.
Like the authors of the EAS statement, the NLA statement authors underline that the measurement of Lp(a) is “currently not standardized or harmonized,” and there is insufficient evidence on the utility of different cut-offs for risk based on age, gender, ethnicity, or the presence of comorbid conditions.
However, they do suggest that Lp(a) levels greater than 50 mg/dL (> 100 nmol/L) may be considered as a risk-enhancing factor favoring the initiation of statin therapy, although they note that the threshold could be threefold higher in African American individuals.
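In practical terms, the suggested cut-off is a simple threshold check, with the caveat that mass (mg/dL) and molar (nmol/L) Lp(a) assays cannot be interconverted by a fixed factor, so each unit is handled on its own scale. A minimal sketch using the thresholds quoted above:

```python
# Minimal sketch of the NLA risk-enhancer cut-off quoted above.
# mg/dL and nmol/L scales are assay-dependent and should not be converted
# with a fixed factor, so each unit keeps its own threshold.
def lpa_is_risk_enhancer(value: float, unit: str) -> bool:
    thresholds = {"mg/dL": 50.0, "nmol/L": 100.0}
    if unit not in thresholds:
        raise ValueError(f"unknown unit: {unit!r}")
    return value > thresholds[unit]

print(lpa_is_risk_enhancer(62.0, "mg/dL"))   # True
print(lpa_is_risk_enhancer(85.0, "nmol/L"))  # False
```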
Despite these reservations, the authors say that Lp(a) testing “is reasonable” for refining the risk assessment of ASCVD in the first-degree relatives of people with premature ASCVD and those with a personal history of premature disease as well as in individuals with primary severe hypercholesterolemia.
Testing also “may be reasonable” to “aid in the clinician-patient discussion about whether to prescribe a statin” in people aged 40-75 years with borderline 10-year ASCVD risk, defined as 5%-7.4%, as well as in other equivocal clinical situations.
In terms of what to do in an individual with raised Lp(a) levels, the statement notes that lifestyle therapy and statins do not decrease Lp(a).
Although lomitapide (Juxtapid) and proprotein convertase subtilisin–kexin type 9 (PCSK9) inhibitors both lower levels of the lipoprotein, the former is “not recommended for ASCVD risk reduction,” whereas the impact of the latter on ASCVD risk reduction via Lp(a) reduction “remains undetermined.”
Several experimental agents are currently under investigation to reduce Lp(a) levels, including SLN360 (Silence Therapeutics) and AKCEA-APO(a)-LRX (Akcea Therapeutics/Ionis Pharmaceuticals).
In the meantime, the authors say it is reasonable to use Lp(a) as a “risk-enhancing factor” for the initiation of moderate- or high-intensity statins in the primary prevention of ASCVD and to consider the addition of ezetimibe and/or PCSK9 inhibitors in high- and very high–risk patients already on maximally tolerated statin therapy.
Finally, the authors recognize the need for “additional evidence” to support clinical practice. In the absence of a randomized clinical trial of Lp(a) lowering in those who are at risk for ASCVD, they note that “several important unanswered questions remain.”
These include: “Is it reasonable to recommend universal testing of Lp(a) in everyone regardless of family history or health status at least once to help encourage healthy habits and inform clinical decision-making?” “Will earlier testing and effective interventions help to improve outcomes?”
Alongside more evidence in children, the authors also emphasize that “additional data are urgently needed in Blacks, South Asians, and those of Hispanic descent.”
No funding declared. Dr. Wilson declares relationships with Osler Institute, Merck Sharp & Dohme, Novo Nordisk, and Alexion Pharmaceuticals. Other authors also declare numerous relationships. Dr. Catapano declares a relationship with Novartis.
A version of this article first appeared on Medscape.com.
Lipoprotein(a) (Lp[a]) levels should be measured in clinical practice to refine risk prediction for atherosclerotic cardiovascular disease (ASCVD) and inform treatment decisions, even if they cannot yet be lowered directly, recommends the National Lipid Association (NLA) in a scientific statement.
The statement was published in the Journal of Clinical Lipidology.
Don P. Wilson, MD, department of pediatric endocrinology and diabetes, Cook Children’s Medical Center, Fort Worth, Tex., told this news organization that lipoprotein(a) is a “very timely subject.”
“The question in the scientific community is: What role does that particular biomarker play in terms of causing serious heart disease, stroke, and calcification of the aortic valve?”
“It’s pretty clear that, in and of itself, it actually can contribute and or cause any of those conditions,” he added. “The thing that’s then sort of problematic is that we don’t have a specific treatment to lower” Lp(a).
However, Dr. Wilson said that the statement underlines it is “still worth knowing” an individual’s Lp(a) concentrations because the risk with increased levels is “even higher for those people who have other conditions, such as metabolic disease or diabetes or high cholesterol.”
There are nevertheless several drugs in phase 2 and 3 clinical trials that appear to have the potential to significantly lower Lp(a) levels.
“I’m very excited,” said Dr. Wilson, noting that, so far, the drugs seem to be “quite safe,” and the currently available data suggest that they can “reduce Lp(a) levels by about 90%, which is huge.”
“That’s better than any drug we’ve got on the market.”
He cautioned, however, that it is going to take time after the drugs are approved to see the real benefits and risks once they start being used in very large populations, given that raised Lp(a) concentrations are present in about 20% of the world population.
The publication of the NLA statement coincides with a similar one from the European Atherosclerosis Society presented at the European Society of Cardiology Congress 2022 on Aug. 29, and published simultaneously in the European Heart Journal.
Coauthor of the EAS statement, Alberico L. Catapano, MD, PhD, professor of pharmacology at the University of Milan, and past president of the EAS, said that there are many areas in which the two statements are “in complete agreement.”
“However, the spirit of the documents is different,” he continued, chief among them being that the EAS statement focuses on the “global risk” of ASCVD and provides a risk calculator to help balance the risk increase with Lp(a) with that from other factors.
Another is that increased Lp(a) levels are recognized as being on a continuum in terms of their risk, such that there is no level at which raised concentrations can be deemed safe.
Dr. Wilson agreed with Dr. Capatano’s assessment, saying that the EAS statement takes current scientific observations “a step further,” in part by emphasizing that Lp(a) is “only one piece of the puzzle” for determining an individuals’ cardiovascular risk.
This will have huge implications for the conversations clinicians have with patients over shared decision-making, Dr. Wilson added.
Nevertheless, Dr. Catapano underlined to this news organization that “both documents are very important” in terms of the need to “raise awareness about a causal risk factor” for cardiovascular disease as well as that modifying Lp(a) concentrations “will probably reduce the risk.”
The statement from the NLA builds on the association’s prior Recommendations for the Patient-Centered Management of Dyslipidemia, published in two parts in 2014 and 2015, and comes to many of the same conclusions as the EAS statement.
It explains that apolipoprotein A, a component of Lp(a) attached to apolipoprotein B, has “unique” properties that promote the “initiation and progression of atherosclerosis and calcific valvular aortic stenosis, through endothelial dysfunction and proinflammatory responses, and pro-osteogenic effects promoting calcification.”
This, in turn, has the potential to cause myocardial infarction and ischemic stroke, the authors note.
This has been confirmed in meta-analyses of prospective, population-based studies showing a high risk for MI, coronary heart disease, and ischemic stroke with high Lp(a) levels, the statement adds.
Moreover, large genetic studies have confirmed that Lp(a) is a causal factor, independent of low-density lipoprotein cholesterol levels, for MI, ischemic stroke, valvular aortic stenosis, coronary artery stenosis, carotid stenosis, femoral artery stenosis, heart failure, cardiovascular mortality, and all-cause mortality.
Like the authors of the EAS statement, the NLA statement authors underline that the measurement of Lp(a) is “currently not standardized or harmonized,” and there is insufficient evidence on the utility of different cut-offs for risk based on age, gender, ethnicity, or the presence of comorbid conditions.
However, they do suggest that Lp(a) levels greater than 50 mg/dL (> 100 nmol/L) may be considered as a risk-enhancing factor favoring the initiation of statin therapy, although they note that the threshold could be threefold higher in African American individuals.
Despite these reservations, the authors say that Lp(a) testing “is reasonable” for refining the risk assessment of ASCVD in the first-degree relatives of people with premature ASCVD and those with a personal history of premature disease as well as in individuals with primary severe hypercholesterolemia.
Testing also “may be reasonable” to “aid in the clinician-patient discussion about whether to prescribe a statin” in people aged 40-75 years with borderline 10-year ASCVD risk, defined as 5%-7.4%, as well as in other equivocal clinical situations.
In terms of what to do in an individual with raised Lp(a) levels, the statement notes that lifestyle therapy and statins do not decrease Lp(a).
Although lomitapide (Juxtapid) and proprotein convertase subtilisin–kexin type 9 (PCSK9) inhibitors both lower levels of the lipoprotein, the former is “not recommended for ASCVD risk reduction,” whereas the impact of the latter on ASCVD risk reduction via Lp(a) reduction “remains undetermined.”
Several experimental agents are currently under investigation to reduce Lp(a) levels, including SLN360 (Silence Therapeutics), and AKCEA-APO(a)-LRX (Akcea Therapeutics/Ionis Pharmaceuticals).
In the meantime, the authors say it is reasonable to use Lp(a) as a “risk-enhancing factor” for the initiation of moderate- or high-intensity statins in the primary prevention of ASCVD and to consider the addition of ezetimibe and/or PCSK9 inhibitors in high- and very high–risk patients already on maximally tolerated statin therapy.
Finally, the authors recognize the need for “additional evidence” to support clinical practice. In the absence of a randomized clinical trial of Lp(a) lowering in those who are at risk for ASCVD, they note that “several important unanswered questions remain.”
These include: “Is it reasonable to recommend universal testing of Lp(a) in everyone regardless of family history or health status at least once to help encourage healthy habits and inform clinical decision-making?” “Will earlier testing and effective interventions help to improve outcomes?”
Alongside more evidence in children, the authors also emphasize that “additional data are urgently needed in Blacks, South Asians, and those of Hispanic descent.”
No funding declared. Dr. Wilson declares relationships with Osler Institute, Merck Sharp & Dohm, Novo Nordisk, and Alexion Pharmaceuticals. Other authors also declare numerous relationships. Dr. Catapano declares a relationship with Novartis.
A version of this article first appeared on Medscape.com.
Lipoprotein(a) (Lp[a]) levels should be measured in clinical practice to refine risk prediction for atherosclerotic cardiovascular disease (ASCVD) and inform treatment decisions, even if they cannot yet be lowered directly, recommends the National Lipid Association (NLA) in a scientific statement.
The statement was published in the Journal of Clinical Lipidology.
Don P. Wilson, MD, department of pediatric endocrinology and diabetes, Cook Children’s Medical Center, Fort Worth, Tex., told this news organization that lipoprotein(a) is a “very timely subject.”
“The question in the scientific community is: What role does that particular biomarker play in terms of causing serious heart disease, stroke, and calcification of the aortic valve?”
“It’s pretty clear that, in and of itself, it actually can contribute and or cause any of those conditions,” he added. “The thing that’s then sort of problematic is that we don’t have a specific treatment to lower” Lp(a).
However, Dr. Wilson said that the statement underlines it is “still worth knowing” an individual’s Lp(a) concentrations because the risk with increased levels is “even higher for those people who have other conditions, such as metabolic disease or diabetes or high cholesterol.”
There are nevertheless several drugs in phase 2 and 3 clinical trials that appear to have the potential to significantly lower Lp(a) levels.
“I’m very excited,” said Dr. Wilson, noting that, so far, the drugs seem to be “quite safe,” and the currently available data suggest that they can “reduce Lp(a) levels by about 90%, which is huge.”
“That’s better than any drug we’ve got on the market.”
He cautioned, however, that it is going to take time after the drugs are approved to see the real benefits and risks once they start being used in very large populations, given that raised Lp(a) concentrations are present in about 20% of the world population.
The publication of the NLA statement coincides with a similar one from the European Atherosclerosis Society presented at the European Society of Cardiology Congress 2022 on Aug. 29, and published simultaneously in the European Heart Journal.
Coauthor of the EAS statement, Alberico L. Catapano, MD, PhD, professor of pharmacology at the University of Milan, and past president of the EAS, said that there are many areas in which the two statements are “in complete agreement.”
“However, the spirit of the documents is different,” he continued, chief among them being that the EAS statement focuses on the “global risk” of ASCVD and provides a risk calculator to help balance the risk increase with Lp(a) with that from other factors.
Another is that increased Lp(a) levels are recognized as being on a continuum in terms of their risk, such that there is no level at which raised concentrations can be deemed safe.
Dr. Wilson agreed with Dr. Capatano’s assessment, saying that the EAS statement takes current scientific observations “a step further,” in part by emphasizing that Lp(a) is “only one piece of the puzzle” for determining an individuals’ cardiovascular risk.
This will have huge implications for the conversations clinicians have with patients over shared decision-making, Dr. Wilson added.
Nevertheless, Dr. Catapano underlined to this news organization that “both documents are very important” in terms of the need to “raise awareness about a causal risk factor” for cardiovascular disease as well as that modifying Lp(a) concentrations “will probably reduce the risk.”
The statement from the NLA builds on the association’s prior Recommendations for the Patient-Centered Management of Dyslipidemia, published in two parts in 2014 and 2015, and comes to many of the same conclusions as the EAS statement.
It explains that apolipoprotein(a), a component of Lp(a) attached to apolipoprotein B, has “unique” properties that promote the “initiation and progression of atherosclerosis and calcific valvular aortic stenosis, through endothelial dysfunction and proinflammatory responses, and pro-osteogenic effects promoting calcification.”
This, in turn, has the potential to cause myocardial infarction and ischemic stroke, the authors note.
This has been confirmed in meta-analyses of prospective, population-based studies showing a high risk for MI, coronary heart disease, and ischemic stroke with high Lp(a) levels, the statement adds.
Moreover, large genetic studies have confirmed that Lp(a) is a causal factor, independent of low-density lipoprotein cholesterol levels, for MI, ischemic stroke, valvular aortic stenosis, coronary artery stenosis, carotid stenosis, femoral artery stenosis, heart failure, cardiovascular mortality, and all-cause mortality.
Like the authors of the EAS statement, the NLA statement authors underline that the measurement of Lp(a) is “currently not standardized or harmonized,” and there is insufficient evidence on the utility of different cut-offs for risk based on age, gender, ethnicity, or the presence of comorbid conditions.
However, they do suggest that Lp(a) levels greater than 50 mg/dL (> 100 nmol/L) may be considered as a risk-enhancing factor favoring the initiation of statin therapy, although they note that the threshold could be threefold higher in African American individuals.
Despite these reservations, the authors say that Lp(a) testing “is reasonable” for refining the risk assessment of ASCVD in the first-degree relatives of people with premature ASCVD and those with a personal history of premature disease as well as in individuals with primary severe hypercholesterolemia.
Testing also “may be reasonable” to “aid in the clinician-patient discussion about whether to prescribe a statin” in people aged 40-75 years with borderline 10-year ASCVD risk, defined as 5%-7.4%, as well as in other equivocal clinical situations.
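To make these cut points concrete, the following is a minimal Python sketch of how a chart-review script might flag the two situations described above. It is illustrative only: the function names are hypothetical, the cut points (> 50 mg/dL or > 100 nmol/L; a 10-year risk of 5%-7.4% at age 40-75 years) are those reported in the statement, and mass and molar Lp(a) assays are not interconvertible by a single fixed factor, so the sketch accepts each unit separately.

```python
# Hedged sketch: flag Lp(a) as a risk-enhancing factor using the cut points
# reported in the NLA statement. Names and structure are illustrative.

LPA_CUTOFF_MG_DL = 50.0    # > 50 mg/dL reported as risk-enhancing
LPA_CUTOFF_NMOL_L = 100.0  # > 100 nmol/L equivalent cut point

def lpa_is_risk_enhancing(lpa_mg_dl=None, lpa_nmol_l=None):
    """True if Lp(a) exceeds the reported cut point in whichever unit
    the laboratory reported (units cannot be interconverted by a
    single fixed factor)."""
    if lpa_mg_dl is not None and lpa_mg_dl > LPA_CUTOFF_MG_DL:
        return True
    if lpa_nmol_l is not None and lpa_nmol_l > LPA_CUTOFF_NMOL_L:
        return True
    return False

def lpa_testing_may_be_reasonable(age_years, ten_year_ascvd_risk_pct):
    """Borderline 10-year ASCVD risk (5%-7.4%) at age 40-75 years,
    where the statement says testing 'may be reasonable' to inform
    the statin discussion."""
    return 40 <= age_years <= 75 and 5.0 <= ten_year_ascvd_risk_pct <= 7.4

# Example: a 62-year-old with Lp(a) of 62 mg/dL and a 6.1% 10-year risk
print(lpa_is_risk_enhancing(lpa_mg_dl=62.0))   # True
print(lpa_testing_may_be_reasonable(62, 6.1))  # True
```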
In terms of what to do in an individual with raised Lp(a) levels, the statement notes that lifestyle therapy and statins do not decrease Lp(a).
Although lomitapide (Juxtapid) and proprotein convertase subtilisin–kexin type 9 (PCSK9) inhibitors both lower levels of the lipoprotein, the former is “not recommended for ASCVD risk reduction,” whereas the impact of the latter on ASCVD risk reduction via Lp(a) reduction “remains undetermined.”
Several experimental agents are currently under investigation to reduce Lp(a) levels, including SLN360 (Silence Therapeutics) and AKCEA-APO(a)-LRX (Akcea Therapeutics/Ionis Pharmaceuticals).
In the meantime, the authors say it is reasonable to use Lp(a) as a “risk-enhancing factor” for the initiation of moderate- or high-intensity statins in the primary prevention of ASCVD and to consider the addition of ezetimibe and/or PCSK9 inhibitors in high- and very high–risk patients already on maximally tolerated statin therapy.
Finally, the authors recognize the need for “additional evidence” to support clinical practice. In the absence of a randomized clinical trial of Lp(a) lowering in those who are at risk for ASCVD, they note that “several important unanswered questions remain.”
These include: “Is it reasonable to recommend universal testing of Lp(a) in everyone regardless of family history or health status at least once to help encourage healthy habits and inform clinical decision-making?” and “Will earlier testing and effective interventions help to improve outcomes?”
Alongside more evidence in children, the authors also emphasize that “additional data are urgently needed in Blacks, South Asians, and those of Hispanic descent.”
No funding declared. Dr. Wilson declares relationships with Osler Institute, Merck Sharp & Dohme, Novo Nordisk, and Alexion Pharmaceuticals. Other authors also declare numerous relationships. Dr. Catapano declares a relationship with Novartis.
A version of this article first appeared on Medscape.com.
Why some infectious disease docs are ‘encouraged’ by new bivalent COVID vaccines
A panel of infectious disease experts shared their take recently on the importance of the newly approved bivalent COVID-19 vaccines, why authorization without human data is not for them a cause for alarm, and what they are most optimistic about at this stage of the pandemic.
“I’m very encouraged by this new development,” Kathryn M. Edwards, MD, said during a media briefing sponsored by the Infectious Diseases Society of America (IDSA).
She continued: “It does seem that if you have a circulating strain BA.4 and BA.5, hitting it with the appropriate vaccine targeted for that is most immunogenic, certainly. We will hopefully see that in terms of effectiveness.”
Changing the vaccines at this point is appropriate, Walter A. Orenstein, MD, said. “One of our challenges is that this virus mutates. Our immune response is focused on an area of the virus that can change and be evaded,” said Dr. Orenstein, professor and associate director of the Emory Vaccine Center at Emory University, Atlanta.
“This is different than measles or polio,” he said. “But for influenza and now with SARS-CoV-2 ... we have to update our vaccines, because the virus changes.”
Man versus mouse
Dr. Edwards addressed the controversy over a lack of human data specific to these next-generation Pfizer/BioNTech and Moderna vaccines. “I do not want people to be unhappy or worried that the bivalent vaccine will act in a different way than the ones that we have been administering for the past 2 years.”
The Food and Drug Administration emergency use authorization may have relied primarily on animal studies, she said, but mice given a vaccine specific to BA.4 and BA.5 “have a much more robust immune response,” compared with those given a BA.1 vaccine.
Also, “over and over and over again we have seen with these SARS-CoV-2 vaccines that the mouse responses mirror the human responses,” said Dr. Edwards, scientific director of the Vanderbilt Vaccine Research Program at Vanderbilt University, Nashville, Tenn., and an IDSA fellow.
“Human data will be coming very soon to look at the immunogenicity,” she said.
A ‘glass half full’ perspective
When asked what they are most optimistic about at this point in the COVID-19 pandemic, Dr. Orenstein said, “I’m really positive in the sense that the vaccines we have are already very effective against severe disease, death, and hospitalization. I feel really good about that. And we have great tools.
“The bottom line for me is, I want to get it myself,” he said regarding the bivalent vaccine.
“There are a lot of things to be happy with,” Dr. Edwards said. “I’m kind of a glass-half-full kind of person.”
Dr. Edwards is confident that the surveillance systems now in place can accurately detect major changes in the virus, including new variants. She is also optimistic about the mRNA technology that allows rapid updates to COVID-19 vaccines.
Furthermore, “I’m happy that we’re beginning to open up – that we can go do different things that we have done in the past and feel much more comfortable,” she said.
More motivational messaging needed
Now is also a good time to renew efforts to get people vaccinated.
“We invested a lot into developing these vaccines, but I think we also need to invest in what I call ‘implementation science research,’ ” Dr. Orenstein said, the goal being to convince people to get vaccinated.
He pointed out that it’s vaccinations, not vaccines, that save lives. “Vaccine doses that remain in the vial are 0% effective.
“When I was director of the United States’ immunization program at the CDC,” Dr. Orenstein said, “my director of communications used to say that you need the right message delivered by the right messenger through the right communications channel.”
Dr. Edwards agreed that listening to people’s concerns and respecting their questions are important. “We also need to make sure that we use the proper messenger, just as Walt said. Maybe the proper messenger isn’t an old gray-haired lady,” she said, referring to herself, “but it’s someone that lives in your community or is your primary care doctor who has taken care of you or your children for many years.”
Research on how to better motivate people to get vaccinated is warranted, Dr. Edwards said, as well as on “how to make sure that this is really a medical issue and not a political issue. That’s been a really big problem.”
A version of this article first appeared on Medscape.com.
Nocturnally pruritic rash
A 74-YEAR-OLD WOMAN presented with a 3-day history of an intensely pruritic rash that was localized to her upper arms, upper chest between her breasts, and upper back. The pruritus was much worse at night while the patient was in bed. Symptoms did not improve with over-the-counter topical corticosteroids.
The patient had a history of atrial fibrillation (for which she was receiving chronic anticoagulation therapy), hypertension, an implanted pacemaker, depression, and Parkinson disease. Her medications included carbidopa-levodopa, fluoxetine, hydrochlorothiazide, metoprolol tartrate, naproxen, and warfarin. She had no known allergies. She reported that she was a nonsmoker and drank 1 glass of wine per week.
There were no recent changes in soaps, detergents, lotions, or makeup, nor did the patient have any bug bites or plant exposure. She shared a home with her spouse and several pets: a dog, a cat, and a Bantam-breed chicken. The patient’s husband, who slept in a different bedroom, had no rash. Recently, the cat had been bringing captured rabbits into the home.
Review of systems was negative for fever, chills, shortness of breath, cough, throat swelling, and rhinorrhea. Physical examination revealed red/pink macules and papules scattered over the upper arms (FIGURE 1), chest, and upper back. Many lesions were excoriated but had no active bleeding or vesicles. Under dermatoscope, no burrowing was found; however, a small (< 1 mm) creature was seen moving rapidly across the skin surface. The physician (CTW) captured and isolated the creature using a sterile lab cup.

WHAT IS YOUR DIAGNOSIS?
HOW WOULD YOU TREAT THIS PATIENT?
Diagnosis: Gamasoidosis
The collected sample (FIGURE 2) was examined and identified as an avian mite by a colleague who specializes in entomology, confirming the diagnosis of gamasoidosis.

Two genera of avian mites are responsible: Dermanyssus and Ornithonyssus. The most common culprits are the red poultry mite (D gallinae) and the northern fowl mite (O sylviarum). These small mites parasitize birds, such as poultry livestock, domesticated birds, and wild game birds. When unfed, the mite appears translucent brown and measures 0.3 to 0.7 mm in length, but after a blood meal, it appears red and increases in size to 1 mm. The mites tend to be active and feed at night and hide during the day.2 This explained the severe nighttime pruritus in this case.
Human infestation, although infrequent, can be a concern for those who work with poultry, as well as in spring and summer, when young birds leave their nests and the mites migrate to find alternative hosts.3 The 1- to 2-mm erythematous maculopapules are often found with excoriations in covered areas.3,4 Unlike scabies, the genitalia and interdigital areas are spared.3,5
Differential for arthropod dermatoses
The differential diagnosis includes cimicosis, pulicosis, pediculosis corporis, and scabies.
Cimicosis is caused by bed bugs (from the insect Cimex genus). Bed bugs are oval and reddish brown, have 6 legs, and range in size from 1 to 7 mm. Most bed bugs hide in cracks or crevices of furniture and other surfaces (eg, bed frames, headboards, seams or holes of box springs or mattresses, or behind wallpaper, switch plates, and picture frames) by day and come out at night to feed on a sleeping host. Commonly, bed bugs will leave a series of bites grouped in rows (described as “breakfast, lunch, and dinner”). The bites can mimic urticaria, and bullous reactions may also occur.2
Pulicosis results from bites caused by a variety of flea species including, but not limited to, human, dog, oriental rat, sticktight, mouse, and chicken fleas. Fleas are small brown insects measuring about 2.5 mm in length, with flat sides and long hind legs. Their bites are most often arranged in a zigzag pattern around a host’s legs and waist. Hypersensitivity reactions may appear as papular urticaria, nodules, or bullae.2
Pediculosis corporis is caused by body lice. The adult louse is 2.5 to 3.5 mm in size, has 6 legs, and is a tan to greyish white color.6 Lice live in clothing, lay their eggs within the seams, and obtain blood meals from the host. Symptoms include generalized itching. The erythematous blue- and copper-colored macules, wheals, and lichenification can occur throughout the body, but spare the hands and feet. Secondary impetigo and furunculosis commonly occur.2
Scabies is caused by an oval mite that is ventrally flat, with dorsal spines. The mite is < 0.5 mm in size, appearing as a pinpoint of white. It burrows into its host’s skin, where it lives and lays eggs, causing pruritic papular lesions and ensuing excoriations. The mite burrows with a predilection for the finger web spaces, wrists, axillae, areolae, umbilicus, lower abdomen, genitals, and buttocks.2
Treatment involves a 3-step process
The mainstay of treatment is removal of the infested bird, decontamination of bedding and clothing, and use of oral antihistamines and topical corticosteroids.1,3,5 Bedding and clothing should be washed. Carpets, rugs, and curtains should be vacuumed, and the vacuum bag placed in a sealed bag in the freezer for several hours before it is discarded. Eggs, larvae, nymphs, and adults are killed at 55 to 60 °C. Because humans are only incidental hosts and mites do not reproduce on them, the use of scabicidal agents, such as permethrin, is controversial.
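Because Celsius and Fahrenheit are easy to confuse at these values, a quick conversion (a sketch added for illustration, not part of the case report) confirms that 55 to 60 °C corresponds to a hot-wash temperature:

```python
# Sanity check on units: convert the 55-60 °C kill range to Fahrenheit.
def celsius_to_fahrenheit(c: float) -> float:
    return c * 9 / 5 + 32

for c in (55, 60):
    print(f"{c} °C = {celsius_to_fahrenheit(c):.0f} °F")  # 131 °F, 140 °F
```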
Our patient was treated with permethrin cream before definitive identification of the mite. Once the mite was identified, the chicken was removed from the home and the patient’s bedding and clothing were decontaminated. The patient continued to apply over-the-counter topical steroids and take oral antihistamines for several more days after the chicken was removed from the home.
ACKNOWLEDGEMENT
The authors would like to acknowledge Patrick Liesch of the University of Wisconsin-Madison’s Department of Entomology, Insect Diagnostic Lab, for his help in identifying the avian mite.
1. Leib AE, Anderson BE. Pruritic dermatitis caused by bird mite infestation. Cutis. 2016;97:E6-E8.
2. Collgros H, Iglesias-Sancho M, Aldunce MJ, et al. Dermanyssus gallinae (chicken mite): an underdiagnosed environmental infestation. Clin Exp Dermatol. 2013;38:374-377. doi: 10.1111/j.1365-2230.2012.04434.x
3. Baselga E, Drolet BA, Esterly NB. Avian mite dermatitis. Pediatrics. 1996;97:743-745.
4. James WD, Elston DM, Treat J, et al, eds. Andrews’ Diseases of the Skin: Clinical Dermatology. 13th ed. Elsevier; 2020.
5. Dermanyssus gallinae infestation: an unusual cause of scalp pruritus treated with permethrin shampoo. J Dermatolog Treat. 2010;21:319-321. doi: 10.3109/09546630903287437
6. Centers for Disease Control and Prevention. Parasites. Reviewed September 12, 2019. Accessed August 4, 2022. www.cdc.gov/parasites/lice/body/biology.html