ILD on the rise: Doctors offer tips for diagnosing deadly disease
“There is definitely a delay from the time of symptom onset to the time that they are even evaluated for ILD,” said Dr. Kulkarni of the department of pulmonary, allergy and critical care medicine at the University of Alabama at Birmingham. “Some patients have had a significant loss of lung function by the time they come to see us. By that point we are limited by what treatment options we can offer.”
Interstitial lung disease is an umbrella term for a group of disorders involving progressive scarring of the lungs – typically irreversible – usually caused by long-term exposure to hazardous materials or by autoimmune disease. It includes idiopathic pulmonary fibrosis (IPF), a rare disease for which therapy can be effective if it is caught early enough. (The term pulmonary fibrosis simply refers to lung scarring.) Another type of ILD is pulmonary sarcoidosis, in which small clumps of immune cells form in the lungs, sometimes following an environmental trigger; if the inflammation doesn’t resolve, it can lead to lung scarring.
Cases of ILD appear to be on the rise, and COVID-19 has made diagnosing it more complicated. One study found the prevalence of ILD and pulmonary sarcoidosis in high-income countries was about 122 of every 100,000 people in 1990 and rose to about 198 of every 100,000 people in 2017. The data were pulled from the Global Burden of Diseases, Injuries, and Risk Factors Study 2017. Globally, the researchers found a prevalence of 62 per 100,000 in 1990, compared with 82 per 100,000 in 2017.
If all of a patient’s symptoms appeared after a bout of COVID-19 and the physician is seeing the patient within 4-6 weeks of the COVID symptoms, the symptoms are likely COVID related, Dr. Kulkarni said. But a full work-up is recommended if a patient has lung crackles, which are an indicator of lung scarring.
“The patterns that are seen on CT scan for COVID pneumonia are very distinct from what we expect to see with idiopathic pulmonary fibrosis,” Dr. Kulkarni said. “Putting all this information together is what is important to differentiate it from COVID pneumonia, as well as other types of ILD.”
A study published earlier this year found similarities between COVID-19 and IPF in gene expression, their IL-15-heavy cytokine storms, and the type of damage to alveolar cells. Both diseases might be driven by endoplasmic reticulum stress, the researchers found.
“COVID-19 resembles IPF at a fundamental level,” they wrote.
Jeffrey Horowitz, MD, a pulmonologist and professor of medicine at the Ohio State University, said the need for early diagnosis is in part a function of the therapies available for ILD.
“They don’t make the lung function better,” he said. “So delays in diagnosis mean that there’s the possibility of underlying progression for months, or sometimes years, before the diagnosis is recognized.”
In an area in which diagnosis is often delayed and the prognosis is dire – untreated patients typically survive 3-5 years after diagnosis – “there’s a tremendous amount of nihilism out there” among patients, he said.
He said patients with long-term shortness of breath and unexplained cough are often told they have asthma and are prescribed inhalers, but then further assessment isn’t performed when those don’t work.
Diagnosing ILD in primary care
Many primary care physicians feel ill-equipped to discuss IPF. More than a dozen physicians contacted to talk about ILD for this piece either did not respond or said they felt unqualified to answer questions about the disease.
“Not my area of expertise” and “I don’t think I’m the right person for this discussion” were two of the responses provided to this news organization.
“For some reason, in the world of primary care, it seems like there’s an impediment to getting pulmonary function studies,” Dr. Horowitz said. “Anybody who has a persistent ongoing prolonged unexplained shortness of breath and cough should have pulmonary function studies done.”
Listening to the lungs alone might not be enough, he said, because early pulmonary fibrosis might produce no clear sign.
“There’s the textbook description of these Velcro-sounding crackles, but sometimes it’s very subtle,” he said. “And unless you’re listening very carefully it can easily be missed by somebody who has a busy practice, or it’s loud.”
William E. Golden, MD, professor of medicine and public health at the University of Arkansas for Medical Sciences, Little Rock, is the sole primary care physician contacted for this piece who spoke with authority on ILD.
For cases of suspected ILD, internist Dr. Golden, who also serves on the editorial advisory board of Internal Medicine News, suggested ordering a test for diffusing capacity for carbon monoxide (DLCO), which will be low in the case of IPF, along with a fine-cut lung CT scan to assess ongoing fibrotic changes.
It’s “not that difficult, but you need to have an index of suspicion for the diagnosis,” he said.
New initiative for helping diagnose ILD
Dr. Kulkarni is a committee member for a new effort under way to try to get patients with ILD diagnosed earlier.
The initiative, called Bridging Specialties: Timely Diagnosis for ILD Patients, has already produced an introductory podcast, and a white paper on the effort and its rationale is expected to be released soon, according to Dr. Kulkarni and her fellow committee members.
The American College of Chest Physicians and the Three Lakes Foundation – a foundation dedicated to pulmonary fibrosis awareness and research – are working together on this initiative. They plan to put together a suite of resources, to be gradually rolled out on the college’s website, to raise awareness about the importance of early diagnosis of ILD.
The full toolkit, expected to be rolled out over the next 12 months, will include a series of podcasts and resources on how to get patients diagnosed earlier and steps to take in cases of suspected ILD, Dr. Kulkarni said.
“The goal would be to try to increase awareness about the disease so that people start thinking more about it up front – and not after we’ve ruled out everything else,” she said. The main audience will be primary care providers, but patients and community pulmonologists would likely also benefit from the resources, the committee members said.
The urgency of the initiative stems from the way ILD treatments work. They are antifibrotic, meaning they help prevent scar tissue from forming, but they can’t reverse scar tissue that has already formed. If scarring is severe, the only option might be a lung transplant, and, since the average age at ILD diagnosis is in the 60s, many patients have comorbidities that make them ineligible for transplant. According to the Global Burden of Disease Study mentioned earlier, the death rate from ILD was 1.93 per 100,000 people in 2017.
“The longer we take to diagnose it, the more chance that inflammation will become scar tissue,” Dr. Kulkarni explained.
William Lago, MD, another member of the committee and a family physician, said identifying ILD early is not a straightforward matter.
“When they first present, it’s hard to pick up,” said Dr. Lago, who is also a staff physician at Cleveland Clinic’s Wooster Family Health Center and medical director of the COVID Recover Clinic there. “Many of them, even themselves, will discount the symptoms.”
Dr. Lago said that patients might resist having a work-up even when a primary care physician identifies symptoms as possible ILD. In rural settings, they might have to travel quite a distance for a CT scan or other necessary evaluations, or they might just not think the symptoms are serious enough.
“Most of the time when I’ve picked up some of my pulmonary fibrosis patients, it’s been incidentally while they’re in the office for other things,” he said. He often has to “push the issue” for further work-up, he said.
The overlap of shortness of breath and cough with other, much more common disorders, such as heart disease or chronic obstructive pulmonary disease (COPD), makes ILD diagnosis a challenge, he said.
“For most of us, we’ve got sometimes 10 or 15 minutes with a patient who’s presenting with 5-6 different problems. And the shortness of breath or the occasional cough – that they think is nothing – is probably the least of those,” Dr. Lago said.
Dr. Golden said he suspected a tool like the one being developed by CHEST would be useful for some clinicians and not for others, adding that “no one has the time to spend on that kind of thing.”
Instead, he suggested just reinforcing what the core symptoms are and what the core testing is, “to make people think about it.”
Dr. Horowitz seemed more optimistic about the likelihood of the CHEST tool being used to diagnose ILD.
Whether and how he would use the CHEST resource will depend on the final form it takes, Dr. Horowitz said. It’s encouraging that it’s being put together by a credible source, he added.
Dr. Kulkarni reported financial relationships with Boehringer Ingelheim, Aluda Pharmaceuticals and PureTech Lyt-100 Inc. Dr. Lago, Dr. Horowitz, and Dr. Golden reported no relevant disclosures.
Katie Lennon contributed to this report.
Crystal bone algorithm predicts early fractures, uses ICD codes
The novel Crystal Bone algorithm (Amgen) predicted 2-year risk of osteoporotic fractures in a large dataset with an accuracy that was consistent with FRAX 10-year risk predictions, researchers report.
The algorithm was built using machine learning and artificial intelligence to predict fracture risk based on International Classification of Diseases (ICD) codes, as described in an article published in the Journal of Medical Internet Research.
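The article does not describe the model’s internals, so the sketch below is purely illustrative of the general approach – a classifier trained on “bag of ICD codes” features – using scikit-learn on synthetic data. The codes, labels, and choice of logistic regression are all assumptions for illustration, not Amgen’s actual method.

```python
# Illustrative sketch only -- NOT Amgen's actual Crystal Bone model.
# Assumption: each patient is represented as a bag of ICD-10 codes, and a
# simple scikit-learn classifier stands in for the published method.
import numpy as np
from sklearn.feature_extraction.text import CountVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.metrics import roc_auc_score

# Synthetic stand-in data: each string is one patient's ICD-10 history.
histories = [
    "M81.0 S52.501A E11.9",   # osteoporosis, wrist fracture, diabetes
    "J44.9 M06.9",            # COPD, rheumatoid arthritis
    "I10 E78.5",              # hypertension, hyperlipidemia
    "M80.08XA S72.001A",      # osteoporosis with fracture, hip fracture
] * 50
labels = np.array([1, 0, 0, 1] * 50)  # 1 = fracture within 2 years (synthetic)

# "Bag of codes" features: one column per distinct ICD-10 code.
vectorizer = CountVectorizer(token_pattern=r"[A-Za-z0-9.]+")
X = vectorizer.fit_transform(histories)

X_train, X_test, y_train, y_test = train_test_split(
    X, labels, test_size=0.25, random_state=0, stratify=labels)

model = LogisticRegression(max_iter=1000).fit(X_train, y_train)
risk = model.predict_proba(X_test)[:, 1]  # predicted 2-year fracture risk
print("AUROC on held-out synthetic data:", roc_auc_score(y_test, risk))
```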
The current validation study was presented September 9 as a poster at the annual meeting of the American Society for Bone and Mineral Research.
The scientists validated the algorithm in more than 100,000 patients aged 50 and older (that is, at risk of fracture) who were part of the Reliant Medical Group dataset (a subset of Optum Care).
Importantly, the algorithm predicted increased fracture risk in many patients who did not have a diagnosis of osteoporosis.
The next steps are validation in other datasets to support the generalizability of Crystal Bone across U.S. health care systems, Elinor Mody, MD, Reliant Medical Group, and colleagues report.
“Implementation research, in which patients identified by Crystal Bone undergo a bone health assessment and receive ongoing management, will help inform the clinical utility of this novel algorithm,” they conclude.
At the poster session, Tina Kelley, Optum Life Sciences, explained: “It’s a screening tool that says: ‘These are your patients that maybe you should spend a little extra time with, ask a few extra questions.’ ”
However, further study is needed before it should be used in clinical practice, she emphasized to this news organization.
‘A very useful advance’ but needs further validation
Invited to comment, Peter R. Ebeling, MD, outgoing president of the ASBMR, noted that “many clinicians now use FRAX to calculate absolute fracture risk and select patients who should initiate anti-osteoporosis drugs.”
With FRAX, clinicians input a patient’s age, sex, weight, height, previous fracture, [history of] parent with fractured hip, current smoking status, glucocorticoids, rheumatoid arthritis, secondary osteoporosis, alcohol (3 units/day or more), and bone mineral density (by DXA at the femoral neck) into the tool, to obtain a 10-year probability of fracture.
“Crystal Bone takes a different approach,” Dr. Ebeling, from Monash University, Melbourne, who was not involved with the research but who disclosed receiving funding from Amgen, told this news organization in an email.
The algorithm uses electronic health records (EHRs) to identify patients who are likely to have a fracture within the next 2 years, he explained, based on diagnoses and medications associated with osteoporosis and fractures. These include ICD-10 codes for fractures at various sites and secondary causes of osteoporosis (such as rheumatoid and other inflammatory arthritis, chronic obstructive pulmonary disease, asthma, celiac disease, and inflammatory bowel disease).
“This is a very useful advance,” Dr. Ebeling summarized, “in that it would alert the clinician to patients in their practice who have a high fracture risk and need to be investigated for osteoporosis and initiated on treatment. Otherwise, the patients would be missed, as currently often occurs.”
“It would need to be adaptable to other [EMR] systems and to be validated in a large separate population to be ready to enter clinical practice,” he said, “but these data look very promising with a good [positive predictive value (PPV)].”
Similarly, Juliet Compston, MD, said: “It provides a novel, fully automated approach to population-based screening for osteoporosis using EHRs to identify people at high imminent risk of fracture.”
Dr. Compston, emeritus professor of bone medicine, University of Cambridge, England, who was not involved with the research but who also disclosed being a consultant for Amgen, selected the study as one of the top clinical science highlights abstracts at the meeting.
“The algorithm looks at ICD codes for previous history of fracture, medications that have adverse effects on bone – for example glucocorticoids, aromatase inhibitors, and anti-androgens – as well as chronic diseases that increase the risk of fracture,” she explained.
“FRAX is the most commonly used tool to estimate fracture probability in clinical practice and to guide treatment decisions,” she noted. However, “currently it requires human input of data into the FRAX website and is generally only performed on individuals who are selected on the basis of clinical risk factors.”
“The Crystal Bone algorithm offers the potential for fully automated population-based screening in older adults to identify those at high risk of fracture, for whom effective therapies are available to reduce fracture risk,” she summarized.
“It needs further validation,” she noted, “and implementation into clinical practice requires the availability of high-quality EHRs.”
Algorithm validated in 106,328 patients aged 50 and older
Despite guidelines that recommend screening for osteoporosis in women aged 65 and older, men older than 70, and adults aged 50-79 with risk factors, real-world data suggest such screening is low, the researchers note.
The current validation study identified 106,328 patients aged 50 and older who had at least 2 years of consecutive medical history with the Reliant Medical Group from December 2014 to November 2020 as well as at least two EHR codes.
The accuracy of predicting a fracture within 2 years, expressed as area under the receiver operating characteristic curve (AUROC), was 0.77, where 1 is perfect, 0.5 is no better than random selection, 0.7 to 0.8 is acceptable, and 0.8 to 0.9 indicates excellent predictive accuracy.
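For readers less familiar with the metric, AUROC is the probability that a randomly chosen patient who goes on to fracture is ranked above a randomly chosen patient who does not. A minimal worked example with invented numbers (not study data):

```python
# AUROC on invented numbers (not study data).
from sklearn.metrics import roc_auc_score

y_true = [0, 0, 1, 0, 1, 1, 0, 1]   # 1 = fracture observed within 2 years
y_score = [0.10, 0.30, 0.35, 0.40, 0.60, 0.65, 0.70, 0.80]  # model risk scores

# Of the 4 x 4 = 16 case/non-case pairs, 12 are ranked correctly: 12/16.
print(roc_auc_score(y_true, y_score))  # 0.75
```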
In the entire Optum Reliant population older than 50, the risk of fracture within 2 years was 1.95%.
The algorithm identified four groups with a greater risk: 19,100 patients had a threefold higher risk of fracture within 2 years, 9,246 patients had a fourfold higher risk, 3,533 patients had a sevenfold higher risk, and 1,735 patients had a ninefold higher risk.
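Assuming those multiples are relative to the overall 1.95% baseline (an assumption; the poster may define the reference group differently), the implied absolute 2-year risks are easy to back out:

```python
# Back-of-envelope arithmetic, assuming the reported multiples are
# relative to the 1.95% baseline 2-year fracture risk (an assumption).
baseline = 0.0195
for label, mult in [("3-fold (n=19,100)", 3), ("4-fold (n=9,246)", 4),
                    ("7-fold (n=3,533)", 7), ("9-fold (n=1,735)", 9)]:
    print(f"{label}: ~{baseline * mult * 100:.0f}% absolute 2-year risk")
# -> roughly 6%, 8%, 14%, and 18%
```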
Many of these patients had no prior diagnosis of osteoporosis
For example, in the 19,100 patients with a threefold greater risk of fracture in 2 years, 69% of patients had not been diagnosed with osteoporosis (49% of them had no history of fracture and 20% did have a history of fracture).
The algorithm had a positive predictive value of 6%-18%, a negative predictive value of 98%-99%, a specificity of 81%-98%, and a sensitivity of 18%-59%, for the four groups.
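All four figures derive from the same 2x2 confusion matrix. A toy example with invented counts (chosen to fall inside the reported ranges, not taken from the study) shows how they relate:

```python
# PPV, NPV, sensitivity, and specificity from a 2x2 confusion matrix.
# Counts are invented for illustration, not taken from the study.
tp, fp = 110, 890    # flagged high-risk: true and false positives
fn, tn = 90, 8910    # not flagged: false and true negatives

ppv = tp / (tp + fp)          # flagged patients who actually fracture
npv = tn / (tn + fn)          # unflagged patients who remain fracture-free
sensitivity = tp / (tp + fn)  # fracture patients the tool flagged
specificity = tn / (tn + fp)  # non-fracture patients the tool did not flag

print(f"PPV {ppv:.0%}, NPV {npv:.0%}, "
      f"sensitivity {sensitivity:.0%}, specificity {specificity:.0%}")
# -> PPV 11%, NPV 99%, sensitivity 55%, specificity 91%
```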
The study was funded by Amgen. Dr. Mody and another author are Reliant Medical Group employees. Ms. Kelley and another author are Optum Life Sciences employees. One author is an employee at Landing AI. Two authors are Amgen employees and own Amgen stock. Dr. Ebeling has disclosed receiving research funding from Amgen, Sanofi, and Alexion, and his institution has received honoraria from Amgen and Kyowa Kirin. Dr. Compston has disclosed receiving speaking and consultancy fees from Amgen and UCB.
A version of this article first appeared on Medscape.com.
FROM ASBMR 2022
Usefulness of IBD biomarker may vary based on microbiome
Levels of calprotectin, a biomarker used to detect intestinal inflammation, may vary in fecal samples based on an individual’s microbiome composition, according to researchers. The results, if confirmed, might help refine its use in monitoring inflammatory bowel disease (IBD).
Researchers used a new ex vivo functional assay to identify specific bacteria that degrade calprotectin and may play a role in variations found in vivo. “Microbiome-based calibration could improve sensitivity and specificity of fecal calprotectin readouts, thereby facilitating more reliable real-time monitoring and ultimately enabling more timely interventions,” the authors wrote in their research letter, which was published in Gastro Hep Advances.
The standard for diagnosing ulcerative colitis (UC) and Crohn’s disease (CD) is endoscopy with biopsy, because together they allow both visual and histological examination of the severity and extent of intestinal inflammation, but they cannot be used to monitor patients on an ongoing basis.
Calprotectin is a promising biomarker for intestinal inflammation, but a meta-analysis found that it has a pooled sensitivity of 85% and a pooled specificity of 75% for the diagnosis of endoscopically active inflammatory bowel disease.
The researchers investigated whether an individual’s microbiome can metabolize calprotectin, which would complicate measurement of fecal calprotectin. They recruited 22 individuals with IBD (64% female, 73% with colonic disease), who provided stool samples and completed a symptom questionnaire in advance of a colonoscopy. Overall, 64% had endoscopically inactive disease, and 82% had clinically inactive disease.
At a cutoff of 50 mcg/g, 9 patients had normal fecal calprotectin levels, and 13 had elevated levels. Those with clinically or endoscopically active disease had higher levels of fecal calprotectin (P < .0001).
There was a significant but poor correlation between disease activity measures and fecal calprotectin levels in CD (r = 0.62; P = .008), but there was no statistically significant association for UC (r = –0.29; P = .6). Endoscopic disease activity was also significantly correlated with fecal calprotectin in CD (r = 0.83; P < .001), but not UC (r = 0.50; P = .4).
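The research letter summarized here does not state which correlation statistic was used; as a minimal sketch of how such an r and P value are computed, Spearman’s rank correlation is assumed below, and all values are invented for illustration:

```python
# Sketch of a disease-activity vs. fecal-calprotectin correlation.
# Spearman's rank correlation is an assumption (the method isn't stated
# in this summary), and all values are invented, not study data.
from scipy.stats import spearmanr

activity = [0, 1, 1, 2, 3, 3, 4, 5]                  # invented activity scores
calprotectin = [15, 40, 35, 90, 160, 120, 300, 450]  # invented mcg/g

r, p = spearmanr(activity, calprotectin)
print(f"r = {r:.2f}, P = {p:.4f}")
```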
The researchers created an ex vivo functional assay to measure calprotectin metabolism by the microbiome. They anaerobically cultured fecal samples in the presence of calprotectin and measured levels of calprotectin after 24 hours. Control samples were grown without microbes, without calprotectin, or lacking both. The researchers tested samples in both standard media and in media with low levels of amino acids, reasoning that the latter condition might encourage catabolism of calprotectin. The cultures with low amino acid content had lower calprotectin levels than those with normal amino acid content (P < .0007).
The researchers found greater calprotectin degradation in the low amino acid media, and the difference was more pronounced among samples taken from individuals with UC than CD (P < .02).
Consistent with previous reports, the researchers found that Firmicutes was the dominant phylum in the metagenomic sequencing data from fecal samples, while Subdoligranulum correlated with calprotectin degradation in low amino acid media (P = .04).
For 5 days, they cultured Subdoligranulum variabile in low amino acid media that also contained calprotectin. Calprotectin levels were lower than in a control sample with cultured Akkermansia muciniphila, a species previously shown not to be associated with calprotectin degradation in low amino acid media (P = .03). Because Subdoligranulum species were not detectable in 5 of 22 fecal samples, the authors say they are unlikely to be the only species capable of metabolizing calprotectin.
Among IBD patients, only one had both endoscopically active colitis and a low fecal calprotectin level. The patient’s microbiome had Subdoligranulum present, and their fecal sample was able to metabolize calprotectin in the functional assay.
The study was limited by its small sample size, which prevented development of a calibration model for fecal calprotectin, and the researchers called for additional studies among individuals with active colitis.
The search continues for reliable, noninvasive methods for monitoring disease activity in inflammatory bowel disease (IBD). Noninvasive disease activity measures improve quality of care by facilitating more frequent assessment of therapeutic efficacy, which for IBD otherwise depends on periodic endoscopic evaluation and biopsy. Available tools such as fecal calprotectin are valuable and widely used but are imperfect.
Of note, a patient with endoscopically active colitis and a relatively low fecal calprotectin level harbored Subdoligranulum species, which – when isolated and assayed ex vivo – degraded calprotectin. These studies suggest that individualized, patient microbiome–based calibration assays might help improve the sensitivity and specificity of fecal calprotectin levels for monitoring disease activity. As the authors note, more patients need to be studied, especially focusing on those with active disease and paradoxically low calprotectin levels.
Deborah C. Rubin, MD, AGAF, is the William B. Kountz Professor of Medicine and professor of developmental biology in the division of gastroenterology at Washington University, St. Louis. She had no conflicts of interest to disclose.
The search continues for reliable, noninvasive methods for monitoring disease activity in inflammatory bowel disease (IBD). Noninvasive disease activity measures improve quality of care by facilitating more frequent assessment of therapeutic efficacy, which for IBD otherwise depends on periodic endoscopic evaluation and biopsy. Available tools such as fecal calprotectin are valuable and widely used but are imperfect.
Of note, a patient with endoscopically active colitis and a relatively low fecal calprotectin level harbored Subdoligranulum species, which – when isolated and assayed ex vivo – degraded calprotectin. These studies suggest that individualized, patient microbiome–based calibration assays might help improve the sensitivity and specificity of fecal calprotectin levels for monitoring disease activity. As the authors note, more patients need to be studied, especially focusing on those with active disease and paradoxically low calprotectin levels.
Deborah C. Rubin, MD, AGAF, is the William B. Kountz Professor of Medicine and professor of developmental biology in the division of gastroenterology at Washington University, St. Louis. She had no conflicts of interest to disclose.
The search continues for reliable, noninvasive methods for monitoring disease activity in inflammatory bowel disease (IBD). Noninvasive disease activity measures improve quality of care by facilitating more frequent assessment of therapeutic efficacy, which for IBD otherwise depends on periodic endoscopic evaluation and biopsy. Available tools such as fecal calprotectin are valuable and widely used but are imperfect.
Of note, a patient with endoscopically active colitis and a relatively low fecal calprotectin level harbored Subdoligranulum species, which – when isolated and assayed ex vivo – degraded calprotectin. These studies suggest that individualized, patient microbiome–based calibration assays might help improve the sensitivity and specificity of fecal calprotectin levels for monitoring disease activity. As the authors note, more patients need to be studied, especially focusing on those with active disease and paradoxically low calprotectin levels.
Deborah C. Rubin, MD, AGAF, is the William B. Kountz Professor of Medicine and professor of developmental biology in the division of gastroenterology at Washington University, St. Louis. She had no conflicts of interest to disclose.
Levels of calprotectin, a biomarker used to detect intestinal inflammation, may vary in fecal samples based on an individual’s microbiome composition, according to researchers. The results, if confirmed, might help refine its use in monitoring inflammatory bowel disease (IBD).
Researchers used a new ex vivo functional assay to identify specific bacteria that degrade calprotectin and may play a role in variations found in vivo. “Microbiome-based calibration could improve sensitivity and specificity of fecal calprotectin readouts, thereby facilitating more reliable real-time monitoring and ultimately enabling more timely interventions,” the authors wrote in their research letter, which was published in Gastro Hep Advances.
The standard for diagnosing ulcerative colitis (UC) and Crohn’s disease (CD) is endoscopy with biopsy, which allows both visual and histologic examination of the severity and extent of intestinal inflammation, but it cannot be used to monitor patients on an ongoing basis.
Calprotectin is a promising biomarker for intestinal inflammation, but a meta-analysis found that it has a pooled sensitivity of 85% and a pooled specificity of 75% for the diagnosis of endoscopically active inflammatory bowel disease.
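To make those pooled figures concrete, the short Python sketch below (using invented counts, not the meta-analysis data) shows how sensitivity and specificity fall out of a 2x2 table:

def sens_spec(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)  # proportion of endoscopically active cases flagged positive
    specificity = tn / (tn + fp)  # proportion of inactive cases correctly negative
    return sensitivity, specificity

# Hypothetical: 85 of 100 active cases test positive; 75 of 100 inactive cases test negative
print(sens_spec(tp=85, fn=15, tn=75, fp=25))  # (0.85, 0.75)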
The researchers investigated whether an individual’s microbiome can metabolize calprotectin, which would complicate measurement of fecal calprotectin. They recruited 22 individuals with IBD (64% female, 73% with colonic disease), who provided stool samples and completed a symptom questionnaire in advance of a colonoscopy. Overall, 64% had endoscopically inactive disease, and 82% had clinically inactive disease.
At a cutoff of 50 mcg/g, 9 patients had normal fecal calprotectin levels, and 13 had elevated levels. Those with clinically or endoscopically active disease had higher levels of fecal calprotectin (P < .0001).
There was a significant but only moderate correlation between clinical disease activity measures and fecal calprotectin levels in CD (r = 0.62; P = .008), whereas there was no statistically significant association for UC (r = –0.29; P = .6). Endoscopic disease activity was also significantly correlated with fecal calprotectin in CD (r = 0.83; P < .001), but not in UC (r = 0.50; P = .4).
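Rank correlations like those reported above are straightforward to compute with standard tools. Below is a minimal Python sketch using SciPy’s spearmanr on invented data, not the study’s measurements:

from scipy.stats import spearmanr

# Hypothetical paired observations: clinical disease activity scores and
# fecal calprotectin levels (mcg/g) for 7 patients
activity = [2, 5, 1, 7, 4, 6, 3]
calprotectin = [40, 150, 25, 410, 180, 260, 90]

rho, p_value = spearmanr(activity, calprotectin)
print(f"Spearman r = {rho:.2f}, P = {p_value:.3f}")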
The researchers created an ex vivo functional assay to measure calprotectin metabolism by the microbiome. They anaerobically cultured fecal samples in the presence of calprotectin and measured levels of calprotectin after 24 hours. Control samples were grown without microbes, without calprotectin, or lacking both. The researchers tested samples in both standard media and in media with low levels of amino acids, reasoning that the latter condition might encourage catabolism of calprotectin. The cultures with low amino acid content had lower calprotectin levels than those with normal amino acid content (P < .0007).
This degradation in the low amino acid media was more pronounced among samples taken from individuals with UC than from those with CD (P < .02).
They used metagenomic sequencing data from the fecal samples to identify bacterial species associated with calprotectin metabolism. Consistent with previous reports, the phylum Firmicutes was dominant, while the genus Subdoligranulum correlated with calprotectin degradation in low amino acid media (P = .04).
For 5 days, the researchers cultured Subdoligranulum variabile in low amino acid media containing calprotectin. Calprotectin levels were lower than in a control culture of Akkermansia muciniphila, a species previously shown not to be associated with calprotectin degradation in low amino acid media (P = .03). Because Subdoligranulum species were not detectable in 5 of the 22 fecal samples, the authors note that these bacteria are unlikely to be the only species capable of metabolizing calprotectin.
Among the IBD patients, only one had both endoscopically active colitis and a low fecal calprotectin level. That patient’s microbiome harbored Subdoligranulum, and their fecal sample metabolized calprotectin in the functional assay.
The study was limited by its small sample size, which prevented development of a calibration model for fecal calprotectin, and the researchers called for additional studies among individuals with active colitis.
FROM GASTRO HEP ADVANCES
Prior psychological distress tied to ‘long-COVID’ conditions
In an analysis of almost 55,000 adult participants in three ongoing studies, having depression, anxiety, worry, perceived stress, or loneliness early in the pandemic, before SARS-CoV-2 infection, was associated with a 50% increased risk for developing long COVID. These types of psychological distress were also associated with a 15% to 51% greater risk for impairment in daily life among individuals with long COVID.
Psychological distress was even more strongly associated with developing long COVID than were physical health risk factors, and the increased risk was not explained by health behaviors such as smoking or physical comorbidities, researchers note.
“Our findings suggest the need to consider psychological health in addition to physical health as risk factors of long COVID-19,” lead author Siwen Wang, MD, postdoctoral fellow, department of nutrition, Harvard T. H. Chan School of Public Health, Boston, said in an interview.
“We need to increase public awareness of the importance of mental health and focus on getting mental health care for people who need it, increasing the supply of mental health clinicians and improving access to care,” she said.
The findings were published online in JAMA Psychiatry.
‘Poorly understood’
Postacute sequelae of SARS-CoV-2 (“long COVID”), defined as “signs and symptoms consistent with COVID-19 that extend beyond 4 weeks from onset of infection,” constitute “an emerging health issue,” the investigators write.
Dr. Wang noted that it has been estimated that 8-23 million Americans have developed long COVID. However, “despite the high prevalence and daily life impairment associated with long COVID, it is still poorly understood, and few risk factors have been established,” she said.
Although psychological distress may be implicated in long COVID, only three previous studies investigated psychological factors as potential contributors, the researchers note. Also, no study has investigated the potential role of other common manifestations of distress that have increased during the pandemic, such as loneliness and perceived stress, they add.
To investigate these issues, the researchers turned to three large ongoing longitudinal studies: the Nurses’ Health Study II (NHS II), the Nurses’ Health Study 3 (NHS3), and the Growing Up Today Study (GUTS).
They analyzed data on 54,960 total participants (96.6% women; mean age, 57.5 years). Of the full group, 38% were active health care workers.
Participants completed an online COVID-19 questionnaire from April 2020 to Sept. 1, 2020 (baseline), and monthly surveys thereafter. Beginning in August 2020, surveys were administered quarterly. The end of follow-up was in November 2021.
The COVID questionnaires included questions about positive SARS-CoV-2 test results, COVID symptoms and hospitalization since March 1, 2020, and the presence of long-term COVID symptoms, such as fatigue, respiratory problems, persistent cough, muscle/joint/chest pain, smell/taste problems, confusion/disorientation/brain fog, depression/anxiety/changes in mood, headache, and memory problems.
Participants who reported these post-COVID conditions were asked about the frequency of symptoms and the degree of impairment in daily life.
Inflammation, immune dysregulation implicated?
The Patient Health Questionnaire–4 (PHQ-4) was used to assess for anxiety and depressive symptoms in the past 2 weeks. It consists of a two-item depression measure (PHQ-2) and a two-item Generalized Anxiety Disorder Scale (GAD-2).
Non–health care providers completed two additional assessments of psychological distress: the four-item Perceived Stress Scale and the three-item UCLA Loneliness Scale.
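For readers unfamiliar with the instrument, the Python sketch below illustrates conventional PHQ-4 subscale scoring, based on the instrument’s published scoring rules rather than the study’s own code: each item is rated 0-3, and a subscale total of 3 or more is the usual screening threshold.

def score_phq4(anxiety_items, depression_items):
    """anxiety_items: the two GAD-2 responses; depression_items: the two PHQ-2 responses (each rated 0-3)."""
    gad2 = sum(anxiety_items)
    phq2 = sum(depression_items)
    return {
        "gad2": gad2,
        "phq2": phq2,
        "probable_anxiety": gad2 >= 3,     # conventional GAD-2 screening cutoff
        "probable_depression": phq2 >= 3,  # conventional PHQ-2 screening cutoff
    }

print(score_phq4(anxiety_items=[1, 1], depression_items=[2, 2]))
# {'gad2': 2, 'phq2': 4, 'probable_anxiety': False, 'probable_depression': True}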
The covariates included demographic and socioeconomic factors, weight, smoking status, marital status, and medical conditions such as diabetes, hypertension, hypercholesterolemia, asthma, and cancer.
For each participant, the investigators calculated the number of types of distress experienced at a high level, including probable depression, probable anxiety, worry about COVID-19, being in the top quartile of perceived stress, and loneliness.
During the 19 months of follow-up (1-47 weeks after baseline), 6% of respondents reported a positive result on a SARS-CoV-2 antibody, antigen, or polymerase chain reaction test.
Of these, 43.9% reported long-COVID conditions, with most reporting that symptoms lasted 2 months or longer; 55.8% reported at least occasional daily life impairment.
The most common post-COVID conditions were fatigue (reported by 56%), loss of smell or taste problems (44.6%), shortness of breath (25.5%), confusion/disorientation/brain fog (24.5%), and memory issues (21.8%).
Among participants who had been infected, preinfection psychological distress was associated with an increased risk for post-COVID conditions after adjusting for sociodemographic factors, health behaviors, and comorbidities; each type of distress was associated with post-COVID conditions.
In addition, participants who had experienced at least two types of distress prior to infection were at nearly 50% increased risk for post–COVID conditions (risk ratio, 1.49; 95% confidence interval, 1.23-1.80).
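As an illustration of how a risk ratio and its confidence interval are derived from raw counts, the Python sketch below uses invented numbers; the published estimate (RR, 1.49) came from adjusted models, which a simple unadjusted 2x2 calculation like this does not reproduce.

import math

def risk_ratio(events_exposed, n_exposed, events_unexposed, n_unexposed):
    """Unadjusted risk ratio with a large-sample 95% CI computed on the log scale."""
    rr = (events_exposed / n_exposed) / (events_unexposed / n_unexposed)
    se_log_rr = math.sqrt(1 / events_exposed - 1 / n_exposed
                          + 1 / events_unexposed - 1 / n_unexposed)
    lo = math.exp(math.log(rr) - 1.96 * se_log_rr)
    hi = math.exp(math.log(rr) + 1.96 * se_log_rr)
    return rr, lo, hi

# Hypothetical: 90 of 150 with >= 2 distress types vs 400 of 1000 without develop long COVID
print(risk_ratio(90, 150, 400, 1000))  # RR = 1.5, plus its 95% CI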
Among those with post-COVID conditions, all types of distress were associated with increased risk for daily life impairment (RR range, 1.15-1.51).
Senior author Andrea Roberts, PhD, senior research scientist at the Harvard T. H. Chan School of Public Health, Boston, noted that the investigators did not examine biological mechanisms potentially underlying the association they found.
However, “based on prior research, it may be that inflammation and immune dysregulation related to psychological distress play a role in the association of distress with long COVID, but we can’t be sure,” Dr. Roberts said.
Contributes to the field
Commenting for this article, Yapeng Su, PhD, a postdoctoral researcher at the Fred Hutchinson Cancer Research Center in Seattle, called the study “great work contributing to the long-COVID research field and revealing important connections” with psychological stress prior to infection.
Dr. Su, who was not involved with the study, was previously at the Institute for Systems Biology, also in Seattle, and has written about long COVID.
He noted that the “biological mechanism of such intriguing linkage is definitely the important next step, which will likely require deep phenotyping of biological specimens from these patients longitudinally.”
Dr. Wang pointed to past research suggesting that some patients with mental illness “sometimes develop autoantibodies that have also been associated with increased risk of long COVID.” In addition, depression “affects the brain in ways that may explain certain cognitive symptoms in long COVID,” she added.
More studies are now needed to understand how psychological distress increases the risk for long COVID, said Dr. Wang.
The research was supported by grants from the Eunice Kennedy Shriver National Institute of Child Health and Human Development, the National Institutes of Health, the Dean’s Fund for Scientific Advancement Acceleration Award from the Harvard T. H. Chan School of Public Health, the Massachusetts Consortium on Pathogen Readiness Evergrande COVID-19 Response Fund Award, and the Veterans Affairs Health Services Research and Development Service funds. Dr. Wang and Dr. Roberts have reported no relevant financial relationships. The other investigators’ disclosures are listed in the original article. Dr. Su reports no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA PSYCHIATRY
Barriers to System Quality Improvement in Health Care
Corresponding author: Ebrahim Barkoudah, MD, MPH, Department of Medicine, Brigham and Women’s Hospital, Boston, MA; [email protected]
Process improvement in any industry sector aims to increase the efficiency of resource utilization and delivery methods (cost) and the quality of the product (outcomes), with the goal of ultimately achieving continuous development.1 In the health care industry, variation in processes and outcomes, along with inefficiency in resource use, results in changes in value (the ratio of outcomes to costs); these are the general targets of quality improvement (QI) efforts employing various implementation methodologies.2 When the ultimate aim is to serve the patient (customer), best clinical practice includes both maintaining high quality (individual care delivery) and controlling costs (efficient care system delivery), leading to optimal delivery (value-based care). High-quality individual care and efficient care delivery are not competing concepts, but when working to improve both health care outcomes and costs, traditional and nontraditional barriers to system QI often arise.3
The possible scenarios after a QI intervention include backsliding (regression to the mean over time), steady state (a minimal fixed improvement that is sustained), and continuous improvement (tangible enhancement that persists after the intervention is completed, with a legacy effect).4 The scalability of results can be considered during the process measurement and intervention design phases of all QI projects; however, the complex barriers in the health care environment at each level of implementation must be accounted for to prevent failure in the scalability phase.5
The barriers to optimal QI outcomes leading to continuous improvement are multifactorial, involving both intrinsic and extrinsic factors.6 These factors operate at 3 fundamental levels: (1) individual-level inertia/beliefs, prior personal knowledge, and team-related factors7,8; (2) intervention-related and process-specific barriers and clinical practice obstacles; and (3) organizational-level challenges and macro-level and population-level barriers (Figure). The obstacles faced during the implementation phase will likely span 2 of these levels simultaneously, which adds complexity, can hinder or prevent implementation of a tangible, successful QI process, and can ultimately lead to backsliding or minimal fixed improvement rather than continuous improvement. Furthermore, a patient-centered approach to QI adds further complexity in design and execution, given the importance of reaching sustainable, meaningful improvement by incorporating patient preferences, caregiver engagement, and shared decision-making.9
Overcoming these multidomain barriers and achieving resilience and sustainability requires thoughtful planning and execution through a multifaceted approach.10 A meaningful start could include addressing clinical inertia at the individual and team levels by promoting open innovation and welcoming collaborations and ideas from outside the institution through networks.11 At the individual level, encouraging participation and motivating health care workers in QI toward a multidisciplinary approach to operations will foster collaborative harmony. Concurrently, the organization should support QI capability and scalability by removing competing priorities and establishing effective leadership that ensures resource allocation, communicates clear value-based principles, and fosters an environment of psychological safety.
A continuous improvement state is the optimal QI target, a target that can be attained by removing obstacles and paving a clear pathway to implementation. Focusing on the 3 levels of barriers will position the organization for meaningful and successful QI phases to achieve continuous improvement.
1. Adesola S, Baines T. Developing and evaluating a methodology for business process improvement. Business Process Manage J. 2005;11(1):37-46. doi:10.1108/14637150510578719
2. Gershon M. Choosing which process improvement methodology to implement. J Appl Business & Economics. 2010;10(5):61-69.
3. Porter ME, Teisberg EO. Redefining Health Care: Creating Value-Based Competition on Results. Harvard Business Press; 2006.
4. Holweg M, Davies J, De Meyer A, Lawson B, Schmenner RW. Process Theory: The Principles of Operations Management. Oxford University Press; 2018.
5. Shortell SM, Bennett CL, Byck GR. Assessing the impact of continuous quality improvement on clinical practice: what it will take to accelerate progress. Milbank Q. 1998;76(4):593-624. doi:10.1111/1468-0009.00107
6. Solomons NM, Spross JA. Evidence‐based practice barriers and facilitators from a continuous quality improvement perspective: an integrative review. J Nurs Manage. 2011;19(1):109-120. doi:10.1111/j.1365-2834.2010.01144.x
7. Phillips LS, Branch WT, Cook CB, et al. Clinical inertia. Ann Intern Med. 2001;135(9):825-834. doi:10.7326/0003-4819-135-9-200111060-00012
8. Stevenson K, Baker R, Farooqi A, Sorrie R, Khunti K. Features of primary health care teams associated with successful quality improvement of diabetes care: a qualitative study. Fam Pract. 2001;18(1):21-26. doi:10.1093/fampra/18.1.21
9. What is patient-centered care? NEJM Catalyst. January 1, 2017. Accessed August 31, 2022. https://catalyst.nejm.org/doi/full/10.1056/CAT.17.0559
10. Kilbourne AM, Beck K, Spaeth‐Rublee B, et al. Measuring and improving the quality of mental health care: a global perspective. World Psychiatry. 2018;17(1):30-38. doi:10.1002/wps.20482
11. Huang HC, Lai MC, Lin LH, Chen CT. Overcoming organizational inertia to strengthen business model innovation: An open innovation perspective. J Organizational Change Manage. 2013;26(6):977-1002. doi:10.1108/JOCM-04-2012-0047
Reporting Coronary Artery Calcium on Low-Dose Computed Tomography Impacts Statin Management in a Lung Cancer Screening Population
Cigarette smoking is an independent risk factor for lung cancer and atherosclerotic cardiovascular disease (ASCVD).1-3 The National Lung Screening Trial (NLST) demonstrated both a reduction in lung cancer mortality with surveillance low-dose computed tomography (LDCT) and that ASCVD was the most common cause of death among smokers.4,5 ASCVD remains the leading cause of death in the lung cancer screening (LCS) population.2,3 After publication of the NLST results, the US Preventive Services Task Force (USPSTF) established LCS eligibility criteria for smokers, and the Centers for Medicare & Medicaid Services approved payment for annual LDCT in this group.1,6,7
Recently, LDCT has been proposed as an adjunct diagnostic tool for detecting coronary artery calcium (CAC), which is independently associated with ASCVD and mortality.8-13 CAC scores have been recommended by the 2019 American College of Cardiology/American Heart Association cholesterol treatment guidelines and have been shown to be cost-effective in guiding statin therapy for patients at borderline to intermediate ASCVD risk.14-16 While CAC is conventionally quantified using electrocardiogram (ECG)-gated CT, these scans are not routinely performed in clinical practice because preventive CAC screening is neither recommended by the USPSTF nor covered by most insurance providers.17,18 LDCT, conversely, is reimbursable and a well-validated ASCVD risk predictor.18,19
In this study, we aimed to determine the validity of LDCT in identifying CAC among the military LCS population and whether it would impact statin recommendations based on 10-year ASCVD risk.
Methods
Participants were recruited from a retrospective cohort of 563 Military Health System (MHS) beneficiaries who received LCS with LDCT at Naval Medical Center Portsmouth (NMCP) in Virginia between January 1, 2019, and December 31, 2020. The 2013 USPSTF LCS guidelines were followed, as the 2021 guidelines had not been published before the start of the study; thus, eligible participants were adults aged 55 to 80 years with at least a 30-pack-year smoking history who currently smoked or had quit within 15 years of the date of study consent.6,7
Between November 2020 and May 2021, study investigators screened 287 patient records and recruited 190 participants by telephone, starting with individuals who had the most recent LDCT and working backward until reaching the predetermined target of 170 subjects, who provided in-office consent before undergoing ECG-gated CT. Because LDCT was not obtained simultaneously with the ECG-gated CT, participants were required to complete their gated CT within 24 months of their last LDCT. Of the 190 subjects initially recruited, those who were ineligible for LCS (n = 4), had a history of angioplasty, stenting, or bypass revascularization (n = 4), did not complete their ECG-gated CT within the specified time frame (n = 8), or withdrew from the study (n = 4) were excluded. While gated CT scans were scored for CAC prospectively, LDCT scans (previously read only for general lung pathology) were not scored until after participant consent. Patients were followed via health record reviews for 3 months after their gated CT to document any additional imaging ordered by their primary care practitioners. The study was approved by the NMCP Institutional Review Board.
Coronary Artery Calcification Scoring
We performed CT scans using a Siemens SOMATOM Flash (a second-generation dual-source scanner) and a GE LightSpeed VCT (a single-source, 64-slice scanner). A step-and-shoot prospective trigger technique was used, and contiguous axial images were reconstructed at 2.5-mm or 3-mm intervals for CAC quantification using the Agatston method.20 ECG-gated CT scans were electrocardiographically triggered at mid-diastole (70% of the R-R interval). Radiation dose reduction involved adjusting the mA according to body mass index and using iterative reconstruction. LDCT scans were performed without ECG gating, and contiguous axial images were reconstructed at 1-mm intervals for evaluation of the lung parenchyma. Similar dose-reduction techniques were used to limit radiation exposure for each LDCT scan to < 1.5 mSv, per established guidelines.21 CAC on LDCT was also scored using the Agatston method, and CAC on the 2 scan types was scored by different blinded reviewers.
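For context, the Agatston method weights each calcified lesion by its peak attenuation and multiplies that weight by the lesion’s area. The Python sketch below is a simplified illustration of that arithmetic under the standard thresholds, not the scoring software used in this study; the example lesions are hypothetical.

def density_weight(peak_hu):
    """Standard Agatston density factor for a lesion's peak attenuation (HU)."""
    if peak_hu < 130:
        return 0  # below the 130-HU calcium threshold; lesion is not scored
    if peak_hu < 200:
        return 1
    if peak_hu < 300:
        return 2
    if peak_hu < 400:
        return 3
    return 4

def agatston_score(lesions):
    """lesions: iterable of (area_mm2, peak_hu) pairs pooled across slices."""
    return sum(area * density_weight(hu) for area, hu in lesions)

# Hypothetical lesions: 4 mm^2 at 250 HU and 2 mm^2 at 450 HU
print(agatston_score([(4.0, 250.0), (2.0, 450.0)]))  # 4*2 + 2*4 = 16.0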
Covariates
We reviewed outpatient health records to obtain participants’ age, sex, medical history, statin use, smoking status (current or former), and pack-years. International Classification of Diseases, Tenth Revision codes within medical encounters were used to document prevalent hypertension, hyperlipidemia, and diabetes mellitus. Participants’ most recent low-density lipoprotein cholesterol value (within 24 months of ECG-gated CT) was recorded, and 10-year ASCVD risk scores were calculated using the pooled cohort equations.
Statistical Analysis
A power analysis performed before study initiation determined that a prospective sample size of 170 would be sufficient to estimate the strength of correlation between CAC scores calculated from ECG-gated CT and LDCT with a statistical power of at least 80%. The Wilcoxon rank sum and Fisher exact tests were used to evaluate differences in continuous and categorical CAC scores, respectively. Given the skewed distributions, Spearman rank correlations and the Kendall W coefficient of concordance were used to evaluate correlation and concordance, respectively, of CAC scores between the 2 scan types. κ statistics were used to rate agreement between categorical CAC scores. Bland-Altman analysis was performed to determine the bias and limits of agreement between ECG-gated CT and LDCT.22 For categorical CAC score analysis, participants were categorized into 5 groups according to standard Agatston score cut-off points, using the categories defined by Rumberger and colleagues for both scan types: CAC = 0 (absent), CAC = 1-10 (minimal), CAC = 11-100 (mild), CAC = 101-400 (moderate), and CAC > 400 (severe).23 Of note, LDCT reports at NMCP include a visual CAC score using these qualitative descriptors, which were available to LDCT reviewers. Analyses were conducted using SAS version 9.4 and Microsoft Excel; P values < .05 were considered statistically significant.
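As a rough illustration of two of the steps named above, the sketch below (written in Python rather than the SAS used in the study; all numbers are made-up placeholders) bins Agatston scores into the Rumberger categories and computes a Bland-Altman mean bias with 95% limits of agreement:

import numpy as np

def cac_category(score):
    """Rumberger categories used in this study (Agatston units)."""
    if score == 0:
        return "absent"
    if score <= 10:
        return "minimal"
    if score <= 100:
        return "mild"
    if score <= 400:
        return "moderate"
    return "severe"

def bland_altman(gated, ldct):
    """Mean bias and 95% limits of agreement for paired CAC scores."""
    diff = np.asarray(gated) - np.asarray(ldct)
    bias = diff.mean()
    half_width = 1.96 * diff.std(ddof=1)
    return bias, bias - half_width, bias + half_width

gated = [0.0, 15.0, 220.0, 610.0]  # hypothetical ECG-gated CT scores
ldct = [0.0, 8.0, 150.0, 420.0]    # hypothetical LDCT scores
print([cac_category(s) for s in gated])  # ['absent', 'mild', 'moderate', 'severe']
print(bland_altman(gated, ldct))         # (mean bias, lower limit, upper limit)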
Results
The 170 participants had a mean (SD) age of 62.1 (4.6) years, and 70.6% were male (Table 1). Hyperlipidemia was the most prevalent cardiac risk factor, with almost 70% of participants on a statin. No ischemic ASCVD events occurred during follow-up, although 1 participant was later diagnosed with lung cancer after evaluation of suspicious pulmonary findings on ECG-gated CT. CAC was identified on both scan types in 126 participants; however, LDCT was discordant with gated CT in identifying CAC in 24 subjects (P < .001).
The correlation between CAC scores on ECG-gated CT and LDCT was 0.945 (P < .001), and the concordance was 0.643, indicating moderate agreement between CAC scores on the 2 different scans (Figure 1). Median CAC scores were significantly higher on ECG-gated CT when compared with LDCT (107.5 vs 48.1 Agatston units, respectively; P < .05). Table 2 shows the CAC score characteristics for both scan types. The κ statistic for agreement between categorical CAC scores on ECG-gated CT compared with LDCT was 0.49 (SEκ = 0.05; 95% CI, -0.73-1.71), and the weighted κ statistic was 0.71, indicating moderate to substantial agreement between the 2 scans using the specified cutoff points. The Bland-Altman analysis presented a mean bias of 111.45 Agatston units, with limits of agreement between -268.64 and 491.54 (Figure 2), suggesting that CAC scores on ECG-gated CT were, on average, about 111 units higher than those on LDCT. Finally, there were 24 participants with CAC seen on ECG-gated CT but none identified on LDCT (P < .001); of this cohort, 20 were already on a statin, and of the remaining 4 individuals, 1 met statin criteria based on a > 20% ASCVD risk score alone (regardless of CAC score), 1 with an intermediate risk score met statin criteria based on CAC score reporting, 1 did not meet criteria due to a low-risk score, and the last had no reportable ASCVD risk score.
In the study, there were 80 participants with reportable borderline to intermediate 10-year ASCVD risk scores (5% ≤ 10-year ASCVD risk < 20%), 49 of which were taking a statin. Of the remaining 31 participants not on a statin, 19 met statin criteria after CAC was identified on ECG-gated CT (of these 18 also had CAC identified on LDCT). Subsequently, the number of participants who met statin criteria after additional CAC reporting (on ECG-gated CT and LDCT) was statistically significant (P < .001 and P < .05, respectively). Of the 49 participants on a statin, only 1 individual no longer met statin criteria due to a CAC score < 1 on gated CT.
Discussion
In this study population of recruited MHS beneficiaries, there was a strong correlation and moderate to substantial agreement between CAC scores calculated from LDCT and conventional ECG-gated CT. The number of nonstatin participants who met statin criteria and would have benefited from additional CAC score reporting was statistically significant as compared to their statin counterparts who no longer met the criteria.
CAC screening using nongated CT has become an increasingly available and consistently reproducible means for stratifying ASCVD risk and guiding statin therapy in individuals with equivocal ASCVD risk scores.24-26 As has been demonstrated in previous studies, our study additionally highlights the effective use of LDCT in not only identifying CAC, but also in beneficially impacting statin decisions in the high-risk smoking population.24-26 Our results also showed LDCT missed CAC in participants, the majority of which were already on a statin, and only 1 nonstatin individual benefited from additional CAC reporting. CAC scoring on LDCT should be an adjunct, not a substitute, for ASCVD risk stratification to help guide statin management.25,27
Our results may provide cost considerate implications for preventive CAC screening. While TRICARE covers the cost of ECG-gated CT for MHS beneficiaries, the same is not true of most nonmilitary insurance providers. Concerns about cancer risk from radiation exposure may also lead to hesitation about receiving additional CTs in the smoking population. Since the LCS population already receives annual LDCT, these scans can also be used for CAC scoring to help primary care professionals risk stratify their patients, as has been previously shown.28-31 Clinicians should consider implementing CAC scoring with annual LDCT scans, which would curtail further risks and expenses from CAC-specified scans.
Although CAC is scored visually and routinely reported in the body of LDCT reports at our facility, this is not a universal practice and was performed in only 44% of subjects with known CAC by a previous study.32 In 2007, there were 600,000 CAC scoring scans and > 9 million routine chest CTs performed in the United States.33 Based on our results and the growing consensus in the existing literature, CAC scoring on nongated CT is not only valid and reliable, but also can estimate ASCVD risk and subsequent mortality.34-36 Routine chest CTs remain an available resource for providing additional ASCVD risk stratification.
As we demonstrated, median CAC scores on LDCT were on average significantly lower than those from gated CT. This could be due to slice thickness variability between the GE and Siemens scanners or CAC progression between the time of the retrospective LDCT and prospective ECG-gated CT. Aside from this potential limitation, LDCT has been shown to have a high level of agreement with gated CT in predicting CAC, both visually and by the Agatston technique.37-39 Our results further support previous recommendations of utilizing CAC score categories when determining ASCVD risk from LDCT and that establishing scoring cutoff points warrants further development for potential standardization.37-39 Readers should be mindful that LDCT may still be less sensitive and underestimate low CAC levels and that ECG-gated CT may occasionally be more optimal in determining ASCVD risk when considering the negative predictive value of CAC.40
Limitations
Our study cohort was composed of MHS beneficiaries. Compared with the general population, these individuals may have greater access to care and be more likely to receive statins after preventive screenings. Additional studies may be required to assess CAC-associated statin eligibility among the general population. As discussed previously LDCT was not performed concomitantly with the ECG-gated CT. Although there was moderate to substantial CAC agreement between the 2 scan types, the timing difference could have led to absolute differences in CAC scores across both scan types and impacted the ability to detect low-level CAC on LDCT. CAC values should be interpreted based on the respective scan type.
Conclusions
LDCT is a reliable diagnostic alternative to ECG-gated CT in predicting CAC. CAC scores from LDCT are highly correlated and concordant with those from gated CT and can help guide statin management in individuals with intermediate ASCVD risk. The proposed duality of LDCT to assess ASCVD risk in addition to lung cancer can reduce the need for unnecessary scans while optimizing preventive clinical care. While coronary calcium and elevated CAC scores can facilitate clinical decision making to initiate statin therapy for intermediate-risk patients, physicians must still determine whether additional cardiac testing is warranted to avoid unnecessary procedures and health care costs. Smokers undergoing annual LDCT may benefit from standardized CAC scoring to help further stratify ASCVD risk while limiting the expense and radiation of additional scans.
Acknowledgments
The authors thank Ms. Lorie Gower for her contributions to the study.
1. Leigh A, McEvoy JW, Garg P, et al. Coronary artery calcium scores and atherosclerotic cardiovascular disease risk stratification in smokers. JACC Cardiovasc Imaging. 2019;12(5):852-861. doi:10.1016/j.jcmg.2017.12.017
2. Lu MT, Onuma OK, Massaro JM, D’Agostino RB Sr, O’Donnell CJ, Hoffmann U. Lung cancer screening eligibility in the community: cardiovascular risk factors, coronary artery calcification, and cardiovascular events. Circulation. 2016;134(12):897-899. doi:10.1161/CIRCULATIONAHA.116.023957
3. Tailor TD, Chiles C, Yeboah J, et al. Cardiovascular risk in the lung cancer screening population: a multicenter study evaluating the association between coronary artery calcification and preventive statin prescription. J Am Coll Radiol. 2021;18(9):1258-1266. doi:10.1016/j.jacr.2021.01.015
4. National Lung Screening Trial Research Team, Church TR, Black WC, et al. Results of initial low-dose computed tomographic screening for lung cancer. N Engl J Med. 2013;368(21):1980-1991. doi:10.1056/NEJMoa1209120
5. Mozaffarian D, Benjamin EJ, Go AS, et al. Heart disease and stroke statistics—2015 update: a report from the American Heart Association. Circulation. 2015;131(4):e29-e322. doi:10.1161/CIR.0000000000000152
6. Moyer VA; U.S. Preventive Services Task Force. Screening for lung cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2014;160(5):330-338. doi:10.7326/M13-2771
7. US Preventive Services Task Force, Krist AH, Davidson KW, et al. Screening for lung cancer: US Preventive Services Task Force Recommendation Statement. JAMA. 2021;325(10):962-970. doi:10.1001/jama.2021.1117
8. Arcadi T, Maffei E, Sverzellati N, et al. Coronary artery calcium score on low-dose computed tomography for lung cancer screening. World J Radiol. 2014;6(6):381-387. doi:10.4329/wjr.v6.i6.381
9. Kim SM, Chung MJ, Lee KS, Choe YH, Yi CA, Choe BK. Coronary calcium screening using low-dose lung cancer screening: effectiveness of MDCT with retrospective reconstruction. AJR Am J Roentgenol. 2008;190(4):917-922. doi:10.2214/AJR.07.2979
10. Ruparel M, Quaife SL, Dickson JL, et al. Evaluation of cardiovascular risk in a lung cancer screening cohort. Thorax. 2019;74(12):1140-1146. doi:10.1136/thoraxjnl-2018-212812
11. Jacobs PC, Gondrie MJ, van der Graaf Y, et al. Coronary artery calcium can predict all-cause mortality and cardiovascular events on low-dose CT screening for lung cancer. AJR Am J Roentgenol. 2012;198(3):505-511. doi:10.2214/AJR.10.5577
12. Fan L, Fan K. Lung cancer screening CT-based coronary artery calcification in predicting cardiovascular events: A systematic review and meta-analysis. Medicine (Baltimore). 2018;97(20):e10461. doi:10.1097/MD.0000000000010461
13. Greenland P, Blaha MJ, Budoff MJ, Erbel R, Watson KE. Coronary calcium score and cardiovascular risk. J Am Coll Cardiol. 2018;72(4):434-447. doi:10.1016/j.jacc.2018.05.027
14. Arnett DK, Blumenthal RS, Albert MA, et al. 2019 ACC/AHA Guideline on the Primary Prevention of Cardiovascular Disease: Executive Summary: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. Circulation. 2019;140(11):e563-e595. doi:10.1161/CIR.0000000000000677
15. Pletcher MJ, Pignone M, Earnshaw S, et al. Using the coronary artery calcium score to guide statin therapy: a cost-effectiveness analysis. Circ Cardiovasc Qual Outcomes. 2014;7(2):276-284. doi:10.1161/CIRCOUTCOMES.113.000799
16. Hong JC, Blankstein R, Shaw LJ, et al. Implications of coronary artery calcium testing for treatment decisions among statin candidates according to the ACC/AHA Cholesterol Management Guidelines: a cost-effectiveness analysis. JACC Cardiovasc Imaging. 2017;10(8):938-952. doi:10.1016/j.jcmg.2017.04.014
17. US Preventive Services Task Force, Curry SJ, Krist AH, et al. Risk assessment for cardiovascular disease with nontraditional risk factors: US Preventive Services Task Force Recommendation Statement. JAMA. 2018;320(3):272-280. doi:10.1001/jama.2018.8359
18. Hughes-Austin JM, Dominguez A 3rd, Allison MA, et al. Relationship of coronary calcium on standard chest CT scans with mortality. JACC Cardiovasc Imaging. 2016;9(2):152-159. doi:10.1016/j.jcmg.2015.06.030
19. Haller C, Vandehei A, Fisher R, et al. Incidence and implication of coronary artery calcium on non-gated chest computed tomography scans: a large observational cohort. Cureus. 2019;11(11):e6218. Published 2019 Nov 22. doi:10.7759/cureus.6218
20. Agatston AS, Janowitz WR, Hildner FJ, Zusmer NR, Viamonte M Jr, Detrano R. Quantification of coronary artery calcium using ultrafast computed tomography. J Am Coll Cardiol. 1990;15(4):827-832. doi:10.1016/0735-1097(90)90282-t
21. Aberle D, Berg C, Black W, et al. The National Lung Screening Trial: overview and study design. Radiology. 2011;258(1):243-53. doi:10.1148/radiol.10091808
22. Bland JM, Altman DG. Measuring agreement in method comparison studies. Stat Methods Med Res. 1999;8(2):135-160. doi:10.1177/096228029900800204
23. Rumberger JA, Brundage BH, Rader DJ, Kondos G. Electron beam computed tomographic coronary calcium scanning: a review and guidelines for use in asymptomatic persons. Mayo Clin Proc. 1999;74(3):243-252. doi:10.4065/74.3.243
24. Douthit NT, Wyatt N, Schwartz B. Clinical impact of reporting coronary artery calcium scores of non-gated chest computed tomography on statin management. Cureus. 2021;13(5):e14856. Published 2021 May 5. doi:10.7759/cureus.14856
25. Miedema MD, Dardari ZA, Kianoush S, et al. Statin eligibility, coronary artery calcium, and subsequent cardiovascular events according to the 2016 United States Preventive Services Task Force (USPSTF) Statin Guidelines: MESA (Multi-Ethnic Study of Atherosclerosis). J Am Heart Assoc. 2018;7(12):e008920. Published 2018 Jun 13. doi:10.1161/JAHA.118.008920
26. Fisher R, Vandehei A, Haller C, et al. Reporting the presence of coronary artery calcium in the final impression of non-gated CT chest scans increases the appropriate utilization of statins. Cureus. 2020;12(9):e10579. Published 2020 Sep 21. doi:10.7759/cureus.10579
27. Blaha MJ, Budoff MJ, DeFilippis AP, et al. Associations between C-reactive protein, coronary artery calcium, and cardiovascular events: implications for the JUPITER population from MESA, a population-based cohort study. Lancet. 2011;378(9792):684-692. doi:10.1016/S0140-6736(11)60784-8
28. Waheed S, Pollack S, Roth M, Reichek N, Guerci A, Cao JJ. Collective impact of conventional cardiovascular risk factors and coronary calcium score on clinical outcomes with or without statin therapy: the St Francis Heart Study. Atherosclerosis. 2016;255:193-199. doi:10.1016/j.atherosclerosis.2016.09.060
29. Mahabadi AA, Möhlenkamp S, Lehmann N, et al. CAC score improves coronary and CV risk assessment above statin indication by ESC and AHA/ACC Primary Prevention Guidelines. JACC Cardiovasc Imaging. 2017;10(2):143-153. doi:10.1016/j.jcmg.2016.03.022
30. Blaha MJ, Cainzos-Achirica M, Greenland P, et al. Role of coronary artery calcium score of zero and other negative risk markers for cardiovascular disease: the Multi-Ethnic Study of Atherosclerosis (MESA). Circulation. 2016;133(9):849-858. doi:10.1161/CIRCULATIONAHA.115.018524
31. Hoffmann U, Massaro JM, D’Agostino RB Sr, Kathiresan S, Fox CS, O’Donnell CJ. Cardiovascular event prediction and risk reclassification by coronary, aortic, and valvular calcification in the Framingham Heart Study. J Am Heart Assoc. 2016;5(2):e003144. Published 2016 Feb 22. doi:10.1161/JAHA.115.003144
32. Williams KA Sr, Kim JT, Holohan KM. Frequency of unrecognized, unreported, or underreported coronary artery and cardiovascular calcification on noncardiac chest CT. J Cardiovasc Comput Tomogr. 2013;7(3):167-172. doi:10.1016/j.jcct.2013.05.003
33. Berrington de González A, Mahesh M, Kim KP, et al. Projected cancer risks from computed tomographic scans performed in the United States in 2007. Arch Intern Med. 2009;169(22):2071-2077. doi:10.1001/archinternmed.2009.440
34. Azour L, Kadoch MA, Ward TJ, Eber CD, Jacobi AH. Estimation of cardiovascular risk on routine chest CT: Ordinal coronary artery calcium scoring as an accurate predictor of Agatston score ranges. J Cardiovasc Comput Tomogr. 2017;11(1):8-15. doi:10.1016/j.jcct.2016.10.001
35. Waltz J, Kocher M, Kahn J, Dirr M, Burt JR. The future of concurrent automated coronary artery calcium scoring on screening low-dose computed tomography. Cureus. 2020;12(6):e8574. Published 2020 Jun 12. doi:10.7759/cureus.8574
36. Huang YL, Wu FZ, Wang YC, et al. Reliable categorisation of visual scoring of coronary artery calcification on low-dose CT for lung cancer screening: validation with the standard Agatston score. Eur Radiol. 2013;23(5):1226-1233. doi:10.1007/s00330-012-2726-5
37. Kim YK, Sung YM, Cho SH, Park YN, Choi HY. Reliability analysis of visual ranking of coronary artery calcification on low-dose CT of the thorax for lung cancer screening: comparison with ECG-gated calcium scoring CT. Int J Cardiovasc Imaging. 2014;30 Suppl 2:81-87. doi:10.1007/s10554-014-0507-8
38. Xia C, Vonder M, Pelgrim GJ, et al. High-pitch dual-source CT for coronary artery calcium scoring: A head-to-head comparison of non-triggered chest versus triggered cardiac acquisition. J Cardiovasc Comput Tomogr. 2021;15(1):65-72. doi:10.1016/j.jcct.2020.04.013
39. Hutt A, Duhamel A, Deken V, et al. Coronary calcium screening with dual-source CT: reliability of ungated, high-pitch chest CT in comparison with dedicated calcium-scoring CT. Eur Radiol. 2016;26(6):1521-1528. doi:10.1007/s00330-015-3978-7
40. Blaha MJ, Budoff MJ, Tota-Maharaj R, et al. Improving the CAC score by addition of regional measures of calcium distribution: Multi-Ethnic Study of Atherosclerosis. JACC Cardiovasc Imaging. 2016;9(12):1407-1416. doi:10.1016/j.jcmg.2016.03.001
Cigarette smoking is an independent risk factor for lung cancer and atherosclerotic cardiovascular disease (ASCVD).1-3 The National Lung Screening Trial (NLST) demonstrated that surveillance low-dose computed tomography (LDCT) reduces lung cancer mortality and that ASCVD is the most common cause of death among smokers.4,5 ASCVD remains the leading cause of death in the lung cancer screening (LCS) population.2,3 After publication of the NLST results, the US Preventive Services Task Force (USPSTF) established LCS eligibility criteria for smokers, and the Centers for Medicare & Medicaid Services approved payment for annual LDCT in this group.1,6,7
Recently, LDCT has been proposed as an adjunct diagnostic tool for detecting coronary artery calcium (CAC), which is independently associated with ASCVD and mortality.8-13 CAC scores have been recommended by the 2019 American College of Cardiology/American Heart Association cholesterol treatment guidelines and shown to be cost-effective in guiding statin therapy for patients with borderline to intermediate ASCVD risk.14-16 While CAC is conventionally quantified using electrocardiogram (ECG)-gated CT, these scans are not routinely performed in clinical practice because preventive CAC screening is neither recommended by the USPSTF nor covered by most insurance providers.17,18 LDCT, conversely, is reimbursable and a well-validated ASCVD risk predictor.18,19
In this study, we aimed to determine the validity of LDCT in identifying CAC among the military LCS population and whether it would impact statin recommendations based on 10-year ASCVD risk.
Methods
Participants were recruited from a retrospective cohort of 563 Military Health System (MHS) beneficiaries who received LCS with LDCT at Naval Medical Center Portsmouth (NMCP) in Virginia between January 1, 2019, and December 31, 2020. The 2013 USPSTF LCS guidelines were followed, as the 2021 guidelines had not been published before the start of the study; thus, eligible participants were adults aged 55 to 80 years with at least a 30-pack-year smoking history who currently smoked or had quit within 15 years of the date of study consent.6,7
Between November 2020 and May 2021, study investigators screened 287 patient records and recruited 190 participants by telephone, starting with individuals who had the most recent LDCT and working backward until reaching the predetermined sample of 170 subjects, each of whom provided in-office consent before undergoing ECG-gated CT. Because LDCT was not obtained simultaneously with the ECG-gated CT, participants were required to complete their gated CT within 24 months of their last LDCT. Of the 190 subjects initially recruited, those who were ineligible for LCS (n = 4), had a history of angioplasty, stenting, or bypass revascularization (n = 4), did not complete ECG-gated CT within the specified time frame (n = 8), or withdrew from the study (n = 4) were excluded. Gated CT scans were scored for CAC prospectively, whereas LDCT scans (previously read only for general lung pathology) were not scored until after participant consent. Participants were followed via health record reviews for 3 months after their gated CT to document any additional imaging ordered by their primary care practitioners. The study was approved by the NMCP Institutional Review Board.
Coronary Artery Calcification Scoring
We performed CT scans using a Siemens SOMATOM Flash (a second-generation dual-source scanner) and a GE LightSpeed VCT (a single-source, 64-slice scanner). A step-and-shoot prospective-trigger technique was used, and contiguous axial images were reconstructed at 2.5-mm or 3-mm intervals for CAC quantification using the Agatston method.20 ECG-gated CT scans were triggered at mid-diastole (70% of the R-R interval). Radiation dose was reduced by adjusting tube current (mA) according to body mass index and by using iterative reconstruction. LDCT scans were performed without ECG gating, and contiguous axial images were reconstructed at 1-mm intervals for evaluation of the lung parenchyma. Similar dose-reduction techniques limited radiation exposure for each LDCT scan to < 1.5 mSv, per established guidelines.21 CAC on LDCT was also scored using the Agatston method, and CAC was scored on the 2 scan types by different blinded reviewers.
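Since the Agatston method anchors the scores from both scan types, a brief sketch of its arithmetic may help readers unfamiliar with it: each calcified lesion's area is multiplied by a weight derived from its peak attenuation, and the weighted areas are summed across slices. The Python sketch below is a hypothetical illustration under simplifying assumptions (voxels already restricted to the coronary arteries, roughly 3-mm slices as in the classic method); it is not the scanner vendors' implementation.

```python
import numpy as np
from scipy import ndimage

def agatston_slice_score(hu_slice: np.ndarray, pixel_area_mm2: float) -> float:
    """Agatston contribution of one axial slice (Agatston et al, 1990).

    Lesions are connected regions of voxels >= 130 HU. Each lesion's area
    (mm^2) is multiplied by a density weight based on its peak attenuation:
    1 (130-199 HU), 2 (200-299), 3 (300-399), 4 (>= 400).
    Assumes the slice has already been masked to the coronary arteries.
    """
    calcium = hu_slice >= 130                     # attenuation threshold
    labels, n_lesions = ndimage.label(calcium)    # connected-component lesions
    score = 0.0
    for i in range(1, n_lesions + 1):
        lesion = labels == i
        area_mm2 = lesion.sum() * pixel_area_mm2
        if area_mm2 < 1.0:                        # discard sub-mm^2 specks (noise)
            continue
        peak_hu = hu_slice[lesion].max()
        weight = 1 + min(int((peak_hu - 100) // 100), 3)  # 130-199 -> 1, ..., >= 400 -> 4
        score += area_mm2 * weight
    return score

# Total score: sum the per-slice contributions across the scored volume, eg,
# total = sum(agatston_slice_score(s, pixel_area_mm2) for s in volume_slices)
```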
Covariates
We reviewed outpatient health records to obtain participants’ age, sex, medical history, statin use, smoking status (current or former), and pack-years. International Classification of Diseases, Tenth Revision codes within medical encounters were used to document prevalent hypertension, hyperlipidemia, and diabetes mellitus. Participants’ most recent low-density lipoprotein value (within 24 months of ECG-gated CT) was recorded, and 10-year ASCVD risk scores were calculated using the pooled cohort equations.
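For readers unfamiliar with how the pooled cohort equations turn risk factors into a 10-year risk estimate, the sketch below shows their general form: a linear predictor over (mostly log-transformed) risk factors, converted to risk through a baseline survival term. This is a structural illustration only; the interaction terms are omitted, and real use requires the published sex- and race-specific coefficients, mean linear predictor, and baseline survival, none of which are reproduced here.

```python
import math

def ten_year_ascvd_risk(age, total_chol, hdl, sbp, treated_htn, smoker,
                        diabetic, coef, mean_lp, baseline_survival):
    """General form of the pooled cohort equations:
    risk = 1 - S0 ** exp(LP - mean_LP), where LP is a weighted sum of
    log-transformed risk factors. The coefficients are the published
    sex- and race-specific values (not supplied here), and the
    guideline's age interaction terms are omitted for brevity.
    """
    lp = (coef["ln_age"] * math.log(age)
          + coef["ln_tc"] * math.log(total_chol)
          + coef["ln_hdl"] * math.log(hdl)
          + (coef["ln_sbp_rx"] if treated_htn else coef["ln_sbp_no_rx"]) * math.log(sbp)
          + coef["smoker"] * float(smoker)
          + coef["diabetic"] * float(diabetic))
    return 1.0 - baseline_survival ** math.exp(lp - mean_lp)
```

A result in the 5% to < 20% band is what this study treats as borderline to intermediate risk, the group in which CAC reporting can change statin decisions.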
Statistical Analysis
A power analysis performed before study initiation determined that a prospective sample of 170 participants would detect the correlation between CAC scores calculated from ECG-gated CT and LDCT with at least 80% statistical power. The Wilcoxon rank sum and Fisher exact tests were used to evaluate differences in continuous and categorical CAC scores, respectively. Given the skewed score distributions, the Spearman rank correlation and the Kendall W coefficient of concordance were used to evaluate correlation and concordance, respectively, of CAC scores between the 2 scan types. κ statistics were used to rate agreement between categorical CAC scores, and Bland-Altman analysis was performed to determine the bias and limits of agreement between ECG-gated CT and LDCT.22 For categorical analysis, participants were assigned to 5 groups according to the standard Agatston score cutoff points defined by Rumberger and colleagues: CAC = 0 (absent), CAC = 1-10 (minimal), CAC = 11-100 (mild), CAC = 101-400 (moderate), and CAC > 400 (severe).23 Of note, LDCT reports at NMCP include a visual CAC score using these qualitative descriptors, which was available to LDCT reviewers. Analyses were conducted using SAS version 9.4 and Microsoft Excel; P values < .05 were considered statistically significant.
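The agreement statistics used here are straightforward to reproduce outside SAS; the sketch below shows one way in Python with SciPy and scikit-learn. It is a re-implementation under assumptions, not the authors' code; the paired arrays `gated` and `ldct` are hypothetical Agatston scores, and the Kendall W computation is omitted for brevity.

```python
import numpy as np
from scipy import stats
from sklearn.metrics import cohen_kappa_score

# Hypothetical paired Agatston scores (same participants, 2 scan types).
gated = np.array([0, 15, 110, 420, 35, 0, 250, 980, 5, 60], dtype=float)
ldct = np.array([0, 8, 95, 380, 20, 0, 210, 900, 0, 45], dtype=float)

# Spearman rank correlation of the raw (skewed) scores.
rho, p_rho = stats.spearmanr(gated, ldct)

# Wilcoxon rank sum test of a difference between the score distributions.
w_stat, p_w = stats.ranksums(gated, ldct)

# Bland-Altman bias and 95% limits of agreement.
diff = gated - ldct
bias = diff.mean()
loa = (bias - 1.96 * diff.std(ddof=1), bias + 1.96 * diff.std(ddof=1))

# Map raw scores onto the 5 Rumberger categories (0, 1-10, 11-100,
# 101-400, > 400), then compute plain and linearly weighted kappa.
cuts = [0, 10, 100, 400]
cat_gated = np.searchsorted(cuts, gated, side="left")
cat_ldct = np.searchsorted(cuts, ldct, side="left")
kappa = cohen_kappa_score(cat_gated, cat_ldct)
kappa_weighted = cohen_kappa_score(cat_gated, cat_ldct, weights="linear")
```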
Results
The 170 participants had a mean (SD) age of 62.1 (4.6) years, and 70.6% were male (Table 1). Hyperlipidemia was the most prevalent cardiac risk factor, and almost 70% of participants were taking a statin. No ischemic ASCVD events occurred during follow-up, although 1 participant was later diagnosed with lung cancer after evaluation of suspicious pulmonary findings on ECG-gated CT. CAC was identified on both scan types in 126 participants; however, LDCT failed to identify CAC that was seen on gated CT in 24 participants (P < .001).
The correlation between CAC scores on ECG-gated CT and LDCT was 0.945 (P < .001), and the concordance was 0.643, indicating moderate agreement between CAC scores on the 2 scans (Figure 1). Median CAC scores were significantly higher on ECG-gated CT than on LDCT (107.5 vs 48.1 Agatston units; P < .05). Table 2 shows the CAC score characteristics for both scan types. The κ statistic for agreement between categorical CAC scores on ECG-gated CT and LDCT was 0.49 (SEκ = 0.05; 95% CI, 0.39-0.59), and the weighted κ statistic was 0.71, indicating moderate to substantial agreement between the 2 scans using the specified cutoff points. Bland-Altman analysis showed a mean bias of 111.45 Agatston units, with limits of agreement from -268.64 to 491.54 (Figure 2), indicating that CAC scores on ECG-gated CT were, on average, about 111 units higher than those on LDCT. Finally, 24 participants had CAC seen on ECG-gated CT but none identified on LDCT (P < .001). Of these, 20 were already taking a statin; of the remaining 4 individuals, 1 met statin criteria based on a > 20% ASCVD risk score alone (regardless of CAC score), 1 with an intermediate risk score met statin criteria based on CAC score reporting, 1 did not meet criteria because of a low risk score, and 1 had no reportable ASCVD risk score.
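As a consistency check on these figures, the limits of agreement follow from bias ± 1.96 × SD of the paired differences; back-solving from the reported values gives SD ≈ (491.54 - 111.45) / 1.96 ≈ 194 Agatston units, and the lower limit then checks out as 111.45 - 1.96 × 194 ≈ -268.6.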
Eighty participants had reportable borderline to intermediate 10-year ASCVD risk scores (5% ≤ 10-year ASCVD risk < 20%), 49 of whom were taking a statin. Of the remaining 31 participants not taking a statin, 19 met statin criteria after CAC was identified on ECG-gated CT; 18 of these also had CAC identified on LDCT. Thus, the number of participants who newly met statin criteria after additional CAC reporting was statistically significant for both ECG-gated CT and LDCT (P < .001 and P < .05, respectively). Of the 49 participants taking a statin, only 1 no longer met statin criteria, because of a CAC score < 1 on gated CT.
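The reclassification logic in this paragraph reflects the guideline use of CAC as a tie-breaker for borderline-to-intermediate-risk patients. The helper below is a deliberately simplified, hypothetical reading of the 2019 ACC/AHA approach (thresholds rounded; risk-enhancer and CAC-percentile nuances dropped), not the study's adjudication code.

```python
def statin_recommended(ascvd_risk: float, cac_score: float | None) -> bool:
    """Simplified primary-prevention statin decision (2019 ACC/AHA reading).

    - 10-year risk >= 20%: statin regardless of CAC.
    - 5% <= risk < 20% (borderline/intermediate): CAC arbitrates; a score
      >= 1 favors starting, 0 favors withholding. (The guideline retains
      caution with CAC = 0 in smokers, simplified away here.)
    - risk < 5%: statin generally not indicated.
    """
    if ascvd_risk >= 0.20:
        return True
    if 0.05 <= ascvd_risk < 0.20:
        if cac_score is None:          # no CAC available: fall back to risk alone
            return ascvd_risk >= 0.075
        return cac_score >= 1
    return False

# Example mirroring the cohort: an intermediate-risk participant (12%) with
# CAC newly reported on LDCT moves from "uncertain" to statin-eligible.
assert statin_recommended(0.12, cac_score=54.0)
```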
Discussion
In this study population of recruited MHS beneficiaries, there was a strong correlation and moderate to substantial agreement between CAC scores calculated from LDCT and conventional ECG-gated CT. Significantly more participants not taking a statin newly met statin criteria after additional CAC score reporting than statin-treated participants ceased to meet the criteria.
CAC screening using nongated CT has become an increasingly available and consistently reproducible means of stratifying ASCVD risk and guiding statin therapy in individuals with equivocal ASCVD risk scores.24-26 Consistent with previous studies, our study highlights the effectiveness of LDCT not only in identifying CAC but also in beneficially informing statin decisions in the high-risk smoking population.24-26 Our results also showed that LDCT missed CAC in some participants, most of whom were already taking a statin; only 1 individual not taking a statin would have benefited from additional CAC reporting. CAC scoring on LDCT should therefore be an adjunct to, not a substitute for, ASCVD risk stratification to help guide statin management.25,27
Our results may have cost implications for preventive CAC screening. While TRICARE covers the cost of ECG-gated CT for MHS beneficiaries, most nonmilitary insurance providers do not. Concern about radiation-associated cancer risk may also make patients in the smoking population hesitant to receive additional CTs. Since the LCS population already receives annual LDCT, these scans can also be used for CAC scoring to help primary care professionals risk stratify their patients, as has been previously shown.28-31 Clinicians should consider implementing CAC scoring with annual LDCT, which would curtail the additional risk and expense of dedicated CAC scans.
Although CAC is scored visually and routinely reported in the body of LDCT reports at our facility, this is not universal practice; in a previous study, CAC was reported in only 44% of subjects with known CAC.32 In 2007, there were 600,000 CAC scoring scans and > 9 million routine chest CTs performed in the United States.33 Based on our results and the growing consensus in the existing literature, CAC scoring on nongated CT is not only valid and reliable but can also estimate ASCVD risk and subsequent mortality.34-36 Routine chest CTs remain an available resource for providing additional ASCVD risk stratification.
As we demonstrated, median CAC scores on LDCT were significantly lower than those from gated CT. This could reflect slice-thickness variability between the GE and Siemens scanners or CAC progression between the retrospective LDCT and the prospective ECG-gated CT. Aside from this potential limitation, LDCT has been shown to agree closely with gated CT in detecting CAC, both visually and by the Agatston technique.37-39 Our results further support previous recommendations to use CAC score categories when determining ASCVD risk from LDCT and suggest that scoring cutoff points warrant further development toward standardization.37-39 Readers should be mindful that LDCT may still be less sensitive and underestimate low CAC levels, and that ECG-gated CT may occasionally be preferable for determining ASCVD risk when the negative predictive value of CAC is being relied on.40
Limitations
Our study cohort was composed of MHS beneficiaries. Compared with the general population, these individuals may have greater access to care and be more likely to receive statins after preventive screenings; additional studies may be required to assess CAC-associated statin eligibility in the general population. As discussed previously, LDCT was not performed concomitantly with the ECG-gated CT. Although there was moderate to substantial CAC agreement between the 2 scan types, the timing difference could have produced absolute differences in CAC scores and impaired detection of low-level CAC on LDCT. CAC values should therefore be interpreted in light of the scan type used.
Conclusions
LDCT is a reliable alternative to ECG-gated CT for detecting CAC. CAC scores from LDCT are highly correlated and concordant with those from gated CT and can help guide statin management in individuals with intermediate ASCVD risk. Using LDCT to assess ASCVD risk in addition to lung cancer can reduce unnecessary scans while optimizing preventive clinical care. While elevated CAC scores can support the decision to initiate statin therapy in intermediate-risk patients, physicians must still determine whether additional cardiac testing is warranted to avoid unnecessary procedures and health care costs. Smokers undergoing annual LDCT may benefit from standardized CAC scoring to further stratify ASCVD risk while limiting the expense and radiation of additional scans.
Acknowledgments
The authors thank Ms. Lorie Gower for her contributions to the study.
1. Leigh A, McEvoy JW, Garg P, et al. Coronary artery calcium scores and atherosclerotic cardiovascular disease risk stratification in smokers. JACC Cardiovasc Imaging. 2019;12(5):852-861. doi:10.1016/j.jcmg.2017.12.017
2. Lu MT, Onuma OK, Massaro JM, D’Agostino RB Sr, O’Donnell CJ, Hoffmann U. Lung cancer screening eligibility in the community: cardiovascular risk factors, coronary artery calcification, and cardiovascular events. Circulation. 2016;134(12):897-899. doi:10.1161/CIRCULATIONAHA.116.023957
3. Tailor TD, Chiles C, Yeboah J, et al. Cardiovascular risk in the lung cancer screening population: a multicenter study evaluating the association between coronary artery calcification and preventive statin prescription. J Am Coll Radiol. 2021;18(9):1258-1266. doi:10.1016/j.jacr.2021.01.015
4. National Lung Screening Trial Research Team, Church TR, Black WC, et al. Results of initial low-dose computed tomographic screening for lung cancer. N Engl J Med. 2013;368(21):1980-1991. doi:10.1056/NEJMoa1209120
5. Mozaffarian D, Benjamin EJ, Go AS, et al. Heart disease and stroke statistics—2015 update: a report from the American Heart Association. Circulation. 2015;131(4):e29-e322. doi:10.1161/CIR.0000000000000152
6. Moyer VA; U.S. Preventive Services Task Force. Screening for lung cancer: U.S. Preventive Services Task Force recommendation statement. Ann Intern Med. 2014;160(5):330-338. doi:10.7326/M13-2771
7. US Preventive Services Task Force, Krist AH, Davidson KW, et al. Screening for lung cancer: US Preventive Services Task Force Recommendation Statement. JAMA. 2021;325(10):962-970. doi:10.1001/jama.2021.1117
8. Arcadi T, Maffei E, Sverzellati N, et al. Coronary artery calcium score on low-dose computed tomography for lung cancer screening. World J Radiol. 2014;6(6):381-387. doi:10.4329/wjr.v6.i6.381
9. Kim SM, Chung MJ, Lee KS, Choe YH, Yi CA, Choe BK. Coronary calcium screening using low-dose lung cancer screening: effectiveness of MDCT with retrospective reconstruction. AJR Am J Roentgenol. 2008;190(4):917-922. doi:10.2214/AJR.07.2979
10. Ruparel M, Quaife SL, Dickson JL, et al. Evaluation of cardiovascular risk in a lung cancer screening cohort. Thorax. 2019;74(12):1140-1146. doi:10.1136/thoraxjnl-2018-212812
11. Jacobs PC, Gondrie MJ, van der Graaf Y, et al. Coronary artery calcium can predict all-cause mortality and cardiovascular events on low-dose CT screening for lung cancer. AJR Am J Roentgenol. 2012;198(3):505-511. doi:10.2214/AJR.10.5577
12. Fan L, Fan K. Lung cancer screening CT-based coronary artery calcification in predicting cardiovascular events: A systematic review and meta-analysis. Medicine (Baltimore). 2018;97(20):e10461. doi:10.1097/MD.0000000000010461
13. Greenland P, Blaha MJ, Budoff MJ, Erbel R, Watson KE. Coronary calcium score and cardiovascular risk. J Am Coll Cardiol. 2018;72(4):434-447. doi:10.1016/j.jacc.2018.05.027
14. Arnett DK, Blumenthal RS, Albert MA, et al. 2019 ACC/AHA Guideline on the Primary Prevention of Cardiovascular Disease: Executive Summary: A Report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. Circulation. 2019;140(11):e563-e595. doi:10.1161/CIR.0000000000000677
15. Pletcher MJ, Pignone M, Earnshaw S, et al. Using the coronary artery calcium score to guide statin therapy: a cost-effectiveness analysis. Circ Cardiovasc Qual Outcomes. 2014;7(2):276-284. doi:10.1161/CIRCOUTCOMES.113.000799
16. Hong JC, Blankstein R, Shaw LJ, et al. Implications of coronary artery calcium testing for treatment decisions among statin candidates according to the ACC/AHA Cholesterol Management Guidelines: a cost-effectiveness analysis. JACC Cardiovasc Imaging. 2017;10(8):938-952. doi:10.1016/j.jcmg.2017.04.014
17. US Preventive Services Task Force, Curry SJ, Krist AH, et al. Risk assessment for cardiovascular disease with nontraditional risk factors: US Preventive Services Task Force Recommendation Statement. JAMA. 2018;320(3):272-280. doi:10.1001/jama.2018.8359
18. Hughes-Austin JM, Dominguez A 3rd, Allison MA, et al. Relationship of coronary calcium on standard chest CT scans with mortality. JACC Cardiovasc Imaging. 2016;9(2):152-159. doi:10.1016/j.jcmg.2015.06.030
19. Haller C, Vandehei A, Fisher R, et al. Incidence and implication of coronary artery calcium on non-gated chest computed tomography scans: a large observational cohort. Cureus. 2019;11(11):e6218. Published 2019 Nov 22. doi:10.7759/cureus.6218
20. Agatston AS, Janowitz WR, Hildner FJ, Zusmer NR, Viamonte M Jr, Detrano R. Quantification of coronary artery calcium using ultrafast computed tomography. J Am Coll Cardiol. 1990;15(4):827-832. doi:10.1016/0735-1097(90)90282-t
21. Aberle D, Berg C, Black W, et al. The National Lung Screening Trial: overview and study design. Radiology. 2011;258(1):243-253. doi:10.1148/radiol.10091808
22. Bland JM, Altman DG. Measuring agreement in method comparison studies. Stat Methods Med Res. 1999;8(2):135-160. doi:10.1177/096228029900800204
23. Rumberger JA, Brundage BH, Rader DJ, Kondos G. Electron beam computed tomographic coronary calcium scanning: a review and guidelines for use in asymptomatic persons. Mayo Clin Proc. 1999;74(3):243-252. doi:10.4065/74.3.243
24. Douthit NT, Wyatt N, Schwartz B. Clinical impact of reporting coronary artery calcium scores of non-gated chest computed tomography on statin management. Cureus. 2021;13(5):e14856. Published 2021 May 5. doi:10.7759/cureus.14856
25. Miedema MD, Dardari ZA, Kianoush S, et al. Statin eligibility, coronary artery calcium, and subsequent cardiovascular events according to the 2016 United States Preventive Services Task Force (USPSTF) Statin Guidelines: MESA (Multi-Ethnic Study of Atherosclerosis). J Am Heart Assoc. 2018;7(12):e008920. Published 2018 Jun 13. doi:10.1161/JAHA.118.008920
26. Fisher R, Vandehei A, Haller C, et al. Reporting the presence of coronary artery calcium in the final impression of non-gated CT chest scans increases the appropriate utilization of statins. Cureus. 2020;12(9):e10579. Published 2020 Sep 21. doi:10.7759/cureus.10579
27. Blaha MJ, Budoff MJ, DeFilippis AP, et al. Associations between C-reactive protein, coronary artery calcium, and cardiovascular events: implications for the JUPITER population from MESA, a population-based cohort study. Lancet. 2011;378(9792):684-692. doi:10.1016/S0140-6736(11)60784-8
28. Waheed S, Pollack S, Roth M, Reichek N, Guerci A, Cao JJ. Collective impact of conventional cardiovascular risk factors and coronary calcium score on clinical outcomes with or without statin therapy: the St Francis Heart Study. Atherosclerosis. 2016;255:193-199. doi:10.1016/j.atherosclerosis.2016.09.060
29. Mahabadi AA, Möhlenkamp S, Lehmann N, et al. CAC score improves coronary and CV risk assessment above statin indication by ESC and AHA/ACC Primary Prevention Guidelines. JACC Cardiovasc Imaging. 2017;10(2):143-153. doi:10.1016/j.jcmg.2016.03.022
30. Blaha MJ, Cainzos-Achirica M, Greenland P, et al. Role of coronary artery calcium score of zero and other negative risk markers for cardiovascular disease: the Multi-Ethnic Study of Atherosclerosis (MESA). Circulation. 2016;133(9):849-858. doi:10.1161/CIRCULATIONAHA.115.018524
31. Hoffmann U, Massaro JM, D’Agostino RB Sr, Kathiresan S, Fox CS, O’Donnell CJ. Cardiovascular event prediction and risk reclassification by coronary, aortic, and valvular calcification in the Framingham Heart Study. J Am Heart Assoc. 2016;5(2):e003144. Published 2016 Feb 22. doi:10.1161/JAHA.115.003144
32. Williams KA Sr, Kim JT, Holohan KM. Frequency of unrecognized, unreported, or underreported coronary artery and cardiovascular calcification on noncardiac chest CT. J Cardiovasc Comput Tomogr. 2013;7(3):167-172. doi:10.1016/j.jcct.2013.05.003
33. Berrington de González A, Mahesh M, Kim KP, et al. Projected cancer risks from computed tomographic scans performed in the United States in 2007. Arch Intern Med. 2009;169(22):2071-2077. doi:10.1001/archinternmed.2009.440
34. Azour L, Kadoch MA, Ward TJ, Eber CD, Jacobi AH. Estimation of cardiovascular risk on routine chest CT: Ordinal coronary artery calcium scoring as an accurate predictor of Agatston score ranges. J Cardiovasc Comput Tomogr. 2017;11(1):8-15. doi:10.1016/j.jcct.2016.10.001
35. Waltz J, Kocher M, Kahn J, Dirr M, Burt JR. The future of concurrent automated coronary artery calcium scoring on screening low-dose computed tomography. Cureus. 2020;12(6):e8574. Published 2020 Jun 12. doi:10.7759/cureus.8574
36. Huang YL, Wu FZ, Wang YC, et al. Reliable categorisation of visual scoring of coronary artery calcification on low-dose CT for lung cancer screening: validation with the standard Agatston score. Eur Radiol. 2013;23(5):1226-1233. doi:10.1007/s00330-012-2726-5
37. Kim YK, Sung YM, Cho SH, Park YN, Choi HY. Reliability analysis of visual ranking of coronary artery calcification on low-dose CT of the thorax for lung cancer screening: comparison with ECG-gated calcium scoring CT. Int J Cardiovasc Imaging. 2014;30 Suppl 2:81-87. doi:10.1007/s10554-014-0507-8
38. Xia C, Vonder M, Pelgrim GJ, et al. High-pitch dual-source CT for coronary artery calcium scoring: A head-to-head comparison of non-triggered chest versus triggered cardiac acquisition. J Cardiovasc Comput Tomogr. 2021;15(1):65-72. doi:10.1016/j.jcct.2020.04.013
39. Hutt A, Duhamel A, Deken V, et al. Coronary calcium screening with dual-source CT: reliability of ungated, high-pitch chest CT in comparison with dedicated calcium-scoring CT. Eur Radiol. 2016;26(6):1521-1528. doi:10.1007/s00330-015-3978-7
40. Blaha MJ, Budoff MJ, Tota-Maharaj R, et al. Improving the CAC score by addition of regional measures of calcium distribution: Multi-Ethnic Study of Atherosclerosis. JACC Cardiovasc Imaging. 2016;9(12):1407-1416. doi:10.1016/j.jcmg.2016.03.001
32. Williams KA Sr, Kim JT, Holohan KM. Frequency of unrecognized, unreported, or underreported coronary artery and cardiovascular calcification on noncardiac chest CT. J Cardiovasc Comput Tomogr. 2013;7(3):167-172. doi:10.1016/j.jcct.2013.05.003
33. Berrington de González A, Mahesh M, Kim KP, et al. Projected cancer risks from computed tomographic scans performed in the United States in 2007. Arch Intern Med. 2009;169(22):2071-2077. doi:10.1001/archinternmed.2009.440
34. Azour L, Kadoch MA, Ward TJ, Eber CD, Jacobi AH. Estimation of cardiovascular risk on routine chest CT: Ordinal coronary artery calcium scoring as an accurate predictor of Agatston score ranges. J Cardiovasc Comput Tomogr. 2017;11(1):8-15. doi:10.1016/j.jcct.2016.10.001
35. Waltz J, Kocher M, Kahn J, Dirr M, Burt JR. The future of concurrent automated coronary artery calcium scoring on screening low-dose computed tomography. Cureus. 2020;12(6):e8574. Published 2020 Jun 12. doi:10.7759/cureus.8574
36. Huang YL, Wu FZ, Wang YC, et al. Reliable categorisation of visual scoring of coronary artery calcification on low-dose CT for lung cancer screening: validation with the standard Agatston score. Eur Radiol. 2013;23(5):1226-1233. doi:10.1007/s00330-012-2726-5
37. Kim YK, Sung YM, Cho SH, Park YN, Choi HY. Reliability analysis of visual ranking of coronary artery calcification on low-dose CT of the thorax for lung cancer screening: comparison with ECG-gated calcium scoring CT. Int J Cardiovasc Imaging. 2014;30 Suppl 2:81-87. doi:10.1007/s10554-014-0507-8
38. Xia C, Vonder M, Pelgrim GJ, et al. High-pitch dual-source CT for coronary artery calcium scoring: A head-to-head comparison of non-triggered chest versus triggered cardiac acquisition. J Cardiovasc Comput Tomogr. 2021;15(1):65-72. doi:10.1016/j.jcct.2020.04.013
39. Hutt A, Duhamel A, Deken V, et al. Coronary calcium screening with dual-source CT: reliability of ungated, high-pitch chest CT in comparison with dedicated calcium-scoring CT. Eur Radiol. 2016;26(6):1521-1528. doi:10.1007/s00330-015-3978-7
40. Blaha MJ, Budoff MJ, Tota-Maharaj R, et al. Improving the CAC score by addition of regional measures of calcium distribution: Multi-Ethnic Study of Atherosclerosis. JACC Cardiovasc Imaging. 2016;9(12):1407-1416. doi:10.1016/j.jcmg.2016.03.001
Engaging Veterans With Serious Mental Illness in Primary Care
People with serious mental illness (SMI) are at substantial risk for premature mortality, dying on average 10 to 20 years earlier than others.1 The reasons for this disparity are complex; however, the high prevalence of chronic disease and physical comorbidities in the SMI population has been identified as a prominent factor.2 Engagement and reengagement in care, including primary care for medical comorbidities, can mitigate these mortality risks.2-4 Among veterans with SMI lost to follow-up care for more than 12 months, those not successfully reengaged in care were more likely to die than those who were reengaged.2,3
Given this evidence, health care systems, including the US Department of Veterans Affairs (VA), have looked to better engage these patients in care. These efforts have included mental health population health management, colocation of mental health with primary care, designation of primary care teams specializing in SMI, and integration of mental health and primary care services for patients experiencing homelessness.5-8
As part of a national approach to encourage locally driven quality improvement (QI), the VA compiles performance metrics for each facility, across a gamut of care settings, conditions, and veteran populations.9 Quarterly facility report cards, with longitudinal data and cross-facility comparisons, enable facilities to identify targets for QI and track improvement progress. One metric reports on the proportion of enrolled veterans with SMI who have primary care engagement, defined as having an assigned primary care practitioner (PCP) and a primary care visit in the prior 12 months.
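As a rough illustration of how this metric works, the sketch below computes an engagement rate from a patient-level table. It is a minimal sketch under assumed column names (has_assigned_pcp, last_pc_visit); the VA's actual reporting pipeline is not described here.

```python
# Minimal sketch of the SMI primary care engagement metric described above.
# Column names and data layout are illustrative assumptions.
import pandas as pd

def engagement_rate(smi_cohort: pd.DataFrame, as_of: pd.Timestamp) -> float:
    """Proportion of enrolled veterans with SMI who have an assigned PCP
    and a primary care visit within the prior 12 months."""
    window_start = as_of - pd.DateOffset(months=12)
    engaged = (
        smi_cohort["has_assigned_pcp"]
        & smi_cohort["last_pc_visit"].between(window_start, as_of)
    )
    return float(engaged.mean())

# Worked example: 2 of 3 veterans meet both criteria, so the rate is ~0.67.
cohort = pd.DataFrame({
    "has_assigned_pcp": [True, True, False],
    "last_pc_visit": pd.to_datetime(["2019-01-05", "2018-06-20", "2019-02-01"]),
})
print(engagement_rate(cohort, pd.Timestamp("2019-03-31")))  # 0.666...
```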
In support of a QI initiative at the VA Greater Los Angeles Healthcare System (VAGLAHS), we sought to describe promising practices used by VA facilities with higher levels of primary care engagement among their populations of veterans with SMI.
Methods
We conducted semistructured telephone interviews with a purposeful sample of key informants at VA facilities with high levels of engagement in primary care among veterans with SMI. All project components were conducted by an interdisciplinary team, which included a medical anthropologist (JM), a mental health physician (PR), an internal medicine physician (KC), and other health services researchers (JB, AG). Because the primary objective of the project was QI, this project was designated as nonresearch by the VAGLAHS Institutional Review Board.
The VA Facility Complexity Model classifies facilities into 5 tiers: 1a (most complex), 1b, 1c, 2, and 3 (least complex), based on patient care volume, patient risk, complexity of clinical programs, and size of research and teaching programs. We sampled informants at VA facilities with complexity ratings of 1a or 1b and better-than-median scores for primary care engagement of veterans with SMI, based on report cards from January 2019 to March 2019. To increase the likelihood of identifying lessons that could generalize to the VAGLAHS, with its large population of veterans experiencing homelessness, we selected facilities serving more than 1000 veterans experiencing homelessness.
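For concreteness, the following sketch shows one way such a sampling frame could be assembled from a facility-level report card. The field names and toy data are assumptions for illustration, not the VA's actual schema.

```python
# Illustrative facility selection: complexity tier 1a or 1b, SMI engagement
# at or above the median for those tiers, and >1000 enrolled veterans
# experiencing homelessness. Field names and values are assumed.
import pandas as pd

facilities = pd.DataFrame({
    "facility": ["A", "B", "C", "D"],
    "complexity": ["1a", "1b", "1c", "1a"],
    "smi_engagement": [0.78, 0.74, 0.81, 0.77],   # proportion engaged
    "homeless_enrolled": [1500, 2200, 900, 800],
})

tier_1 = facilities[facilities["complexity"].isin(["1a", "1b"])]
median_score = tier_1["smi_engagement"].median()
sampling_frame = tier_1[
    (tier_1["smi_engagement"] >= median_score)
    & (tier_1["homeless_enrolled"] > 1000)
]
print(sampling_frame["facility"].tolist())  # ['A']
```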
At each selected facility, we first aimed to interview mental health leaders responsible for quality measurement and improvement identified from a national VA database. We then used snowball sampling to identify other informants at these VA facilities who were knowledgeable about relevant processes. Potential interviewees were contacted via email.
Interviews
The interview guide was developed by the interdisciplinary team and based on published literature about strategies for engaging patients with SMI in care. Interview guide questions focused on local practice arrangements, panel management, population health practices, and quality measurement and improvement efforts for engaging veterans with SMI in primary care (Appendix). Interviews were conducted by telephone, from May 2019 through July 2019, by experienced qualitative interviewers (JM, JB). Interviewees were assured confidentiality of their responses.
Interview audio recordings were used to generate detailed notes (AG). Structured summaries were prepared from these notes using a template based on the interview guide. We organized these summaries into matrices for analysis, grouping summarized points by interview domain to facilitate comparison across interviews.10,11 Our team reviewed and discussed the matrices and iteratively identified and defined themes characterizing the common engagement approaches and the nature of the connections between mental health and primary care. To ensure rigor, findings were checked by the senior qualitative lead (JM).
Results
The median SMI engagement score—defined as the proportion of veterans with SMI who had a primary care visit in the prior 12 months and an assigned PCP—was 75.6% across 1a and 1b VA facilities. We identified 16 VA facilities with a median or higher score and more than 1000 enrolled veterans experiencing homelessness. From these 16 facilities, we emailed 31 potential interviewees: 14 identified from a VA database and 17 referred by other interviewees. In total, we interviewed 18 key informants across 11 (69%) facilities, including chiefs of psychology and mental health services, PCPs with mental health expertise, QI specialists, a psychosocial rehabilitation leader, and a local recovery coordinator, who helps veterans with SMI access recovery-oriented services. Characteristics of the facilities and interviewees are shown in Table 1. Interviews lasted a mean of 35 minutes (range, 26-50 minutes).
Engagement Approaches
The strategies used to engage veterans with SMI were heterogeneous, with no single strategy common across all facilities. However, we identified 2 categories of engagement approaches: targeted outreach and routine practices.
Targeted outreach strategies included deliberate, systematic approaches to reach veterans with SMI outside of regularly scheduled visits. These strategies were designed to be proactive, often prioritizing veterans at risk of disengaging from care. Designated VA care team members identified and reached out to veterans well before 12 months had passed since their prior visit (the VA definition of disengagement from care); visits included any care at VA, including, but not limited to, primary care. Table 2 describes the key components of targeted outreach strategies: (1) identifying veterans’ last visit; (2) prioritizing which veterans to contact; and (3) assigning responsibility and reaching out. A key defining feature of targeted outreach is that veterans were identified and prioritized for outreach independently of any visits with mental health or other VA services.
To identify veterans at risk for disengagement, a designated employee in mental health or primary care (eg, a local recovery coordinator) regularly reviewed a VA dashboard or locally developed report that identified veterans who had not engaged in care for several months. The designated employee either contacted those veterans directly or coordinated with other clinicians and support staff. When possible, a clinician or nurse with an existing relationship with the veteran would call them. If no such relationship existed, an administrative staff member made a cold call, sometimes accompanied by mailed outreach materials.
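A minimal sketch of such a report is shown below, assuming a visit-level table and a hypothetical 9-month trigger; the article specifies only that outreach occurred well before the 12-month disengagement threshold.

```python
# Sketch of a targeted outreach report: flag veterans whose most recent VA
# visit of any kind is older than a trigger threshold, and list the
# longest-unseen veterans first. The 9-month trigger is a hypothetical choice.
import pandas as pd

def outreach_list(visits: pd.DataFrame, as_of: pd.Timestamp,
                  trigger_months: int = 9) -> pd.DataFrame:
    """Return veterans due for outreach, sorted by how long they have been unseen."""
    last_visit = (
        visits.groupby("patient_id", as_index=False)["visit_date"].max()
    )
    cutoff = as_of - pd.DateOffset(months=trigger_months)
    due = last_visit[last_visit["visit_date"] < cutoff]
    return due.sort_values("visit_date")  # oldest last visit = highest priority
```

In practice, as described above, a designated employee would work such a list regularly, routing each veteran to a clinician with an existing relationship where one exists and to administrative staff for a cold call otherwise.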
Routine practices were business-as-usual activities embedded in regular clinical workflows that facilitated engagement or reengagement of veterans with SMI in care. Of note, and in contrast to targeted outreach, these activities were tied to veteran visits with mental health practitioners. These practices were typically described as being at least as important as targeted outreach efforts. For example, during mental health visits, clinicians routinely checked the VA electronic health record to assess whether veterans had an assigned primary care team. If not, they would contact the primary care service to refer the patient for a primary care visit and assignment. If the patient already had a primary care team assigned, the mental health practitioner checked for recent primary care visits. If none were evident, the mental health practitioner might email the assigned PCP or contact them via instant message.
At some facilities, mental health support staff were able to directly schedule primary care appointments, which was identified as an important enabling factor in promoting mental health patient engagement in primary care. Some interviewees seemed to take for granted the idea that mental health practitioners would help engage patients in primary care—suggesting that these practices had perhaps become a cultural norm within their facility. However, some interviewees identified clear strategies for making these practices a consistent part of care—for example, by designing a protocol for initial mental health assessments to include a routine check for primary care engagement.
Mental Health/Primary Care Connections
Interviewees characterized the nature of the connections between mental health and primary care at their facilities. Nearly all reported that their medical centers had extensive formal and informal ties between mental health and primary care.
Formal ties included the reverse integration care model, in which primary care services are embedded in mental health settings. Interviewees at sites with programs based on this model noted that these programs enabled warm hand-offs from mental health to primary care and suggested that the model can foster integration between primary care and mental health care for patients with SMI. However, the size, scope, and structure of these programs varied, and some served only a small proportion of a facility’s patients with SMI. Other examples of formal ties included written agreements; frequent, regular meetings between mental health and primary care leadership and front-line staff; and giving mental health clerks the ability to directly schedule primary care appointments.
Informal ties between mental health and primary care included communication and personal working relationships between mental health and PCPs, facilitated by mental health and primary care leaders working together in workgroups and other administrative activities. Some participants described a history of collaboration between mental health and primary care leaders yielding productive and trusting working relationships. Some interviewees described frequent direct communication between individual mental health practitioners and PCPs—either face-to-face or via secure messaging.
Discussion
VA facilities with high levels of primary care engagement among veterans with SMI used extensive engagement strategies, including a diverse array of targeted outreach and routine practices. Both approaches were established and supported by intentional organizational decisions about structure and process, as well as by formal and informal ties between mental health and primary care. In addition, organizational cultural factors were especially relevant to routine practice strategies.
Targeted outreach required an array of organizational resources, both local and national. Large accountable care organizations and integrated delivery systems, like the VA, are often better able than smaller, less integrated health care systems to create dashboards and other informational resources for population health management. Though these resources are difficult to create in fragmented systems, comparable tools have been explored by multiple state health departments.12 Our findings suggest that these data tools, though resource intensive to develop, may enable facilities to be more methodical and reliable in conducting outreach to vulnerable patients.
In contrast to targeted outreach, routine practices depend less on population health management resources and more on cultural norms. Such norms are notoriously difficult to change, but intentional structural decisions like embedding primary care engagement in mental health protocols may signal that primary care engagement is an important and legitimate consideration for mental health care.13
We identified extensive and heterogeneous connections between mental health and primary care in our sample of VA facilities with high engagement of patients with SMI in primary care. A growing body of literature on relational coordination studies the factors that contribute to organizational siloing and the mechanisms for breaking down silos so that work can be coordinated across boundaries (eg, the organizational boundary between mental health and primary care).14 Coordinating care across these boundaries through good relational coordination practices has been shown to improve outcomes in health care and other sectors. Notably, VA facilities in our sample had several of the defining characteristics of good relational coordination: relationships between mental health and primary care that include shared goals, shared knowledge, and mutual respect, all reinforced by frequent communication structured around problem solving.15 The relational coordination literature also offers a way to identify evidence-based interventions for facilitating relational coordination where it is lacking, for example, through information systems, boundary-spanning individuals, facility design, and formal conflict resolution.15 Future work might explore how relational coordination can be further used to optimize mental health and primary care connections to keep veterans with SMI engaged in care.
Our approach of interviewing informants in higher-performing facilities draws heavily on the idea of positive deviance, which holds that information on what works in health care is available from organizations already demonstrating “consistently exceptional performance.”16 This approach works best when high performance and organizational characteristics are observable for a large number of facilities, and when high-performing facilities are willing to share their strategies. These features allow investigators to identify promising practices and hypotheses that can then be empirically tested and compared. Such testing, including assessment for unintended consequences, is needed for the approaches we identified. Research is also needed to identify factors that would promote the implementation of effective strategies.
Limitations
As a QI project seeking to identify promising practices, our interviews were limited to 18 key informants across 11 VA facilities with high levels of primary care engagement among veterans with SMI. No inference can be made that these practices caused the high levels of engagement, nor about the differential impact of individual practices; future work is needed to assess these relationships. We also did not interview veterans to understand their perspectives on these strategies, an additional important topic for future work. In addition, these interviews were conducted before the start of the COVID-19 pandemic, and further work is needed to understand how these strategies may have been modified in response to changes in practice. The shift from in-person to virtual care may have affected interactions both with veterans and between clinicians.
Conclusions
Interviews with key informants indicate that engaging and retaining veterans with SMI in primary care is vital but requires intentional and potentially resource-intensive practices, including targeted outreach and routine engagement strategies embedded into mental health visits. These promising practices can provide valuable insights for both VA and community health care systems caring for patients with SMI.
Acknowledgments
We thank Gracielle J. Tan, MD for administrative assistance in preparing this manuscript.
1. Liu NH, Daumit GL, Dua T, et al. Excess mortality in persons with severe mental disorders: a multilevel intervention framework and priorities for clinical practice, policy and research agendas. World Psychiatry. 2017;16(1):30-40. doi:10.1002/wps.20384
2. Bowersox NW, Kilbourne AM, Abraham KM, et al. Cause-specific mortality among veterans with serious mental illness lost to follow-up. Gen Hosp Psychiatry. 2012;34(6):651-653. doi:10.1016/j.genhosppsych.2012.05.014
3. Davis CL, Kilbourne AM, Blow FC, et al. Reduced mortality among Department of Veterans Affairs patients with schizophrenia or bipolar disorder lost to follow-up and engaged in active outreach to return for care. Am J Public Health. 2012;102(suppl 1):S74-S79. doi:10.2105/AJPH.2011.300502
4. Copeland LA, Zeber JE, Wang CP, et al. Patterns of primary care and mortality among patients with schizophrenia or diabetes: a cluster analysis approach to the retrospective study of healthcare utilization. BMC Health Serv Res. 2009;9:127. doi:10.1186/1472-6963-9-127
5. Abraham KM, Mach J, Visnic S, McCarthy JF. Enhancing treatment reengagement for veterans with serious mental illness: evaluating the effectiveness of SMI re-engage. Psychiatr Serv. 2018;69(8):887-895. doi:10.1176/appi.ps.201700407
6. Ward MC, Druss BG. Reverse integration initiatives for individuals with serious mental illness. Focus (Am Psychiatr Publ). 2017;15(3):271-278. doi:10.1176/appi.focus.20170011
7. Chang ET, Vinzon M, Cohen AN, Young AS. Effective models urgently needed to improve physical care for people with serious mental illnesses. Health Serv Insights. 2019;12:1178632919837628. Published 2019 Apr 2. doi:10.1177/1178632919837628
8. Gabrielian S, Gordon AJ, Gelberg L, et al. Primary care medical services for homeless veterans. Fed Pract. 2014;31(10):10-19.
9. Lemke S, Boden MT, Kearney LK, et al. Measurement-based management of mental health quality and access in VHA: SAIL mental health domain. Psychol Serv. 2017;14(1):1-12. doi:10.1037/ser0000097
10. Averill JB. Matrix analysis as a complementary analytic strategy in qualitative inquiry. Qual Health Res. 2002;12(6):855-866. doi:10.1177/104973230201200611
11. Zuchowski JL, Chrystal JG, Hamilton AB, et al. Coordinating care across health care systems for Veterans with gynecologic malignancies: a qualitative analysis. Med Care. 2017;55(suppl 1):S53-S60. doi:10.1097/MLR.0000000000000737
12. Daumit GL, Stone EM, Kennedy-Hendricks A, Choksy S, Marsteller JA, McGinty EE. Care coordination and population health management strategies and challenges in a behavioral health home model. Med Care. 2019;57(1):79-84. doi:10.1097/MLR.0000000000001023
13. Parmelli E, Flodgren G, Beyer F, et al. The effectiveness of strategies to change organisational culture to improve healthcare performance: a systematic review. Implement Sci. 2011;6:33. doi:10.1186/1748-5908-6-33
14. Bolton R, Logan C, Gittell JH. Revisiting relational coordination: a systematic review. J Appl Behav Sci. 2021;57(3):290-322. doi:10.1177/0021886321991597
15. Gittell JH, Godfrey M, Thistlethwaite J. Interprofessional collaborative practice and relational coordination: improving healthcare through relationships. J Interprof Care. 2013;27(3):210-213. doi:10.3109/13561820.2012.730564
16. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25. Published 2009 May 8. doi:10.1186/1748-5908-4-25
People with serious mental illness (SMI) are at substantial risk for premature mortality, dying on average 10 to 20 years earlier than others.1 The reasons for this disparity are complex; however, the high prevalence of chronic disease and physical comorbidities in the SMI population have been identified as prominent factors.2 Engagement and reengagement in care, including primary care for medical comorbidities, can mitigate these mortality risks.2-4 Among veterans with SMI lost to follow-up care for more than 12 months, those not successfully reengaged in care were more likely to die compared with those reengaged in care.2,3
Given this evidence, health care systems, including the US Department of Veterans Affairs (VA), have looked to better engage these patients in care. These efforts have included mental health population health management, colocation of mental health with primary care, designation of primary care teams specializing in SMI, and integration of mental health and primary care services for patients experiencing homelessness.5-8
As part of a national approach to encourage locally driven quality improvement (QI), the VA compiles performance metrics for each facility, across a gamut of care settings, conditions, and veteran populations.9 Quarterly facility report cards, with longitudinal data and cross-facility comparisons, enable facilities to identify targets for QI and track improvement progress. One metric reports on the proportion of enrolled veterans with SMI who have primary care engagement, defined as having an assigned primary care practitioner (PCP) and a primary care visit in the prior 12 months.
In support of a QI initiative at the VA Greater Los Angeles Healthcare System (VAGLAHS), we sought to describe promising practices being utilized by VA facilities with higher levels of primary care engagement among their veterans with SMI populations.
Methods
We conducted semistructured telephone interviews with a purposeful sample of key informants at VA facilities with high levels of engagement in primary care among veterans with SMI. All project components were conducted by an interdisciplinary team, which included a medical anthropologist (JM), a mental health physician (PR), an internal medicine physician (KC), and other health services researchers (JB, AG). Because the primary objective of the project was QI, this project was designated as nonresearch by the VAGLAHS Institutional Review Board.
The VA Facility Complexity Model classifies facilities into 5 tiers: 1a (most complex), 1b, 1c, 2, and 3 (least complex), based on patient care volume, patient risk, complexity of clinical programs, and size of research and teaching programs. We sampled informants at VA facilities with complexity ratings of 1a or 1b with better than median scores for primary care engagement of veterans with SMI based on report cards from January 2019 to March 2019. To increase the likelihood of identifying lessons that can generalize to the VAGLAHS with its large population of veterans experiencing homelessness, we selected facilities serving populations consisting of more than 1000 veterans experiencing homelessness.
At each selected facility, we first aimed to interview mental health leaders responsible for quality measurement and improvement identified from a national VA database. We then used snowball sampling to identify other informants at these VA facilities who were knowledgeable about relevant processes. Potential interviewees were contacted via email.
Interviews
The interview guide was developed by the interdisciplinary team and based on published literature about strategies for engaging patients with SMI in care. Interview guide questions focused on local practice arrangements, panel management, population health practices, and quality measurement and improvement efforts for engaging veterans with SMI in primary care (Appendix). Interviews were conducted by telephone, from May 2019 through July 2019, by experienced qualitative interviewers (JM, JB). Interviewees were assured confidentiality of their responses.
Interview audio recordings were used to generate detailed notes (AG). Structured summaries were prepared from these notes, using a template based on the interview guide. We organized these summaries into matrices for analysis, grouping summarized points by interview domains to facilitate comparison across interviews.10-11 Our team reviewed and discussed the matrices, and iteratively identified and defined themes to identify the common engagement approaches and the nature of the connections between mental health and primary care. To ensure rigor, findings were checked by the senior qualitative lead (JM).
Results
The median SMI engagement score—defined as the proportion of veterans with SMI who have had a primary care visit in the prior 12 months and who have an assigned PCP—was 75.6% across 1a and 1b VA facilities. We identified 16 VA facilities that had a median or higher score and more than 1000 enrolled veterans experiencing homelessness. From these16 facilities, we emailed 31 potential interviewees, 14 of whom were identified from a VA database and 17 referred by other interviewees. In total, we interviewed 18 key informants across 11 (69%) facilities, including chiefs of psychology and mental health services, PCPs with mental health expertise, QI specialists, a psychosocial rehabilitation leader, and a local recovery coordinator, who helps veterans with SMI access recovery-oriented services. Characteristics of the facilities and interviewees are shown in Table 1. Interviews lasted a mean 35 (range, 26-50) minutes.
Engagement Approaches
The strategies used to engage veterans with SMI were heterogenous, with no single strategy common across all facilities. However, we identified 2 categories of engagement approaches: targeted outreach and routine practices.
Targeted outreach strategies included deliberate, systematic approaches to reach veterans with SMI outside of regularly scheduled visits. These strategies were designed to be proactive, often prioritizing veterans at risk of disengaging from care. Designated VA care team members identified and reached out to veterans well before 12 months had passed since their prior visit (the VA definition of disengagement from care); visits included any care at VA, including, but not exclusively, primary care. Table 2 describes the key components of targeted outreach strategies: (1) identifying veterans’ last visit; (2) prioritizing which veterans to outreach to; and (3) assigning responsibility and reaching out. A key defining feature of targeted outreach is that veterans were identified and prioritized for outreach independent from any visits with mental health or other VA services.
In identifying veterans at risk for disengagement, a designated employee in mental health or primary care (eg, local recovery coordinator) reviewed a VA dashboard or locally developed report that identified veterans who have not engaged in care for several months. This process was repeated regularly. The designated employee either contacted those veterans directly or coordinated with other clinicians and support staff. When possible, a clinician or nurse with an existing relationship with the veteran would call them. If no such relationship existed, an administrative staff member made a cold call, sometimes accompanied by mailed outreach materials.
Routine practices were business-as-usual activities embedded in regular clinical workflows that facilitated engagement or reengagement of veterans with SMI in care. Of note, and in contrast to targeted outreach, these activities were tied to veteran visits with mental health practitioners. These practices were typically described as being at least as important as targeted outreach efforts. For example, during mental health visits, clinicians routinely checked the VA electronic health record to assess whether veterans had an assigned primary care team. If not, they would contact the primary care service to refer the patient for a primary care visit and assignment. If the patient already had a primary care team assigned, the mental health practitioner checked for recent primary care visits. If none were evident, the mental health practitioner might email the assigned PCP or contact them via instant message.
At some facilities, mental health support staff were able to directly schedule primary care appointments, which was identified as an important enabling factor in promoting mental health patient engagement in primary care. Some interviewees seemed to take for granted the idea that mental health practitioners would help engage patients in primary care—suggesting that these practices had perhaps become a cultural norm within their facility. However, some interviewees identified clear strategies for making these practices a consistent part of care—for example, by designing a protocol for initial mental health assessments to include a routine check for primary care engagement.
Mental Health/Primary Care Connections
Interviewees characterized the nature of the connections between mental health and primary care at their facilities. Nearly all interviewees described that their medical centers had extensive ties, formal and informal, between mental health and primary care.
Formal ties may include the reverse integration care model, in which primary care services are embedded in mental health settings. Interviewees at sites with programs based on this model noted that these programs enabled warm hand-offs from mental health to primary care and suggested that it can foster integration between primary care and mental health care for patients with SMI. However, the size, scope, and structure of these programs varied, sometimes serving a small proportion of a facility’s population of SMI patients. Other examples of formal ties included written agreements, establishing frequent, regular meetings between mental health and primary care leadership and front-line staff, and giving mental health clerks the ability to directly schedule primary care appointments.
Informal ties between mental health and primary care included communication and personal working relationships between mental health and PCPs, facilitated by mental health and primary care leaders working together in workgroups and other administrative activities. Some participants described a history of collaboration between mental health and primary care leaders yielding productive and trusting working relationships. Some interviewees described frequent direct communication between individual mental health practitioners and PCPs—either face-to-face or via secure messaging.
Discussion
VA facilities with high levels of primary care engagement among veterans with SMI used extensive engagement strategies, including a diverse array of targeted outreach and routine practices. In both approaches, intentional organizational structural and process decisions, as well as formal and informal ties between mental health and primary care, established and supported them. In addition, organizational cultural factors were especially relevant to routine practice strategies.
To enable targeted outreach, a bevy of organizational resources, both local and national were required. Large accountable care organizations and integrated delivery systems, like the VA, are often better able to create dashboards and other informational resources for population health management compared with smaller, less integrated health care systems. Though these resources are difficult to create in fragmented systems, comparable tools have been explored by multiple state health departments.12 Our findings suggest that these data tools, though resource intensive to develop, may enable facilities to be more methodical and reliable in conducting outreach to vulnerable patients.
In contrast to targeted outreach, routine practices depend less on population health management resources and more on cultural norms. Such norms are notoriously difficult to change, but intentional structural decisions like embedding primary care engagement in mental health protocols may signal that primary care engagement is an important and legitimate consideration for mental health care.13
We identified extensive and heterogenous connections between mental health and primary care in our sample of VA facilities with high engagement of patients with SMI in primary care. A growing body of literature on relational coordination studies the factors that contribute to organizational siloing and mechanisms for breaking down those silos so work can be coordinated across boundaries (eg, the organizational boundary between mental health and primary care).14 Coordinating care across these boundaries, through good relational coordination practices has been shown to improve outcomes in health care and other sectors. Notably, VA facilities in our sample had several of the defining characteristics of good relational coordination: relationships between mental health and primary care that include shared goals, shared knowledge, and mutual respect, all reinforced by frequent communication structured around problem solving.15 The relational coordination literature also offers a way to identify evidence-based interventions for facilitating relational coordination in places where it is lacking, for example, with information systems, boundary-spanning individuals, facility design, and formal conflict resolution.15 Future work might explore how relational coordination can be further used to optimize mental health and primary care connections to keep veterans with SMI engaged in care.
Our approach of interviewing informants in higher-performing facilities draws heavily on the idea of positive deviance, which holds that information on what works in health care is available from organizations that already are demonstrating “consistently exceptional performance.”16 This approach works best when high performance and organizational characteristics are observable for a large number of facilities, and when high-performing facilities are willing to share their strategies. These features allow investigators to identify promising practices and hypotheses that can then be empirically tested and compared. Such testing, including assessing for unintended consequences, is needed for the approaches we identified. Research is also needed to assess for factors that would promote the implementation of effective strategies.
Limitations
As a QI project seeking to identify promising practices, our interviews were limited to 18 key informants across 11 VA facilities with high engagement of care among veterans with SMI. No inferences can be made that these practices are directly related to this high level of engagement, nor the differential impact of different practices. Future work is needed to assess for these relationships. We also did not interview veterans to understand their perspectives on these strategies, which is an additional important topic for future work. In addition, these interviews were performed before the start of the COVID-19 pandemic. Further work is needed to understand how these strategies may have been modified in response to changes in practice. The shift to care from in-person to virtual services may have impacted both clinical interactions with veterans, as well as between clinicians.
Conclusions
Interviews with key informants demonstrate that while engaging and retaining veterans with SMI in primary care is vital, it also requires intentional and potentially resource-intensive practices, including targeted outreach and routine engagement strategies embedded into mental health visits. These promising practices can provide valuable insights for both VA and community health care systems providing care to patients with SMI.
Acknowledgments
We thank Gracielle J. Tan, MD for administrative assistance in preparing this manuscript.
People with serious mental illness (SMI) are at substantial risk for premature mortality, dying on average 10 to 20 years earlier than others.1 The reasons for this disparity are complex; however, the high prevalence of chronic disease and physical comorbidities in the SMI population have been identified as prominent factors.2 Engagement and reengagement in care, including primary care for medical comorbidities, can mitigate these mortality risks.2-4 Among veterans with SMI lost to follow-up care for more than 12 months, those not successfully reengaged in care were more likely to die compared with those reengaged in care.2,3
Given this evidence, health care systems, including the US Department of Veterans Affairs (VA), have looked to better engage these patients in care. These efforts have included mental health population health management, colocation of mental health with primary care, designation of primary care teams specializing in SMI, and integration of mental health and primary care services for patients experiencing homelessness.5-8
As part of a national approach to encourage locally driven quality improvement (QI), the VA compiles performance metrics for each facility, across a gamut of care settings, conditions, and veteran populations.9 Quarterly facility report cards, with longitudinal data and cross-facility comparisons, enable facilities to identify targets for QI and track improvement progress. One metric reports on the proportion of enrolled veterans with SMI who have primary care engagement, defined as having an assigned primary care practitioner (PCP) and a primary care visit in the prior 12 months.
In support of a QI initiative at the VA Greater Los Angeles Healthcare System (VAGLAHS), we sought to describe promising practices being utilized by VA facilities with higher levels of primary care engagement among their veterans with SMI populations.
Methods
We conducted semistructured telephone interviews with a purposeful sample of key informants at VA facilities with high levels of engagement in primary care among veterans with SMI. All project components were conducted by an interdisciplinary team, which included a medical anthropologist (JM), a mental health physician (PR), an internal medicine physician (KC), and other health services researchers (JB, AG). Because the primary objective of the project was QI, this project was designated as nonresearch by the VAGLAHS Institutional Review Board.
The VA Facility Complexity Model classifies facilities into 5 tiers: 1a (most complex), 1b, 1c, 2, and 3 (least complex), based on patient care volume, patient risk, complexity of clinical programs, and size of research and teaching programs. We sampled informants at VA facilities with complexity ratings of 1a or 1b with better than median scores for primary care engagement of veterans with SMI based on report cards from January 2019 to March 2019. To increase the likelihood of identifying lessons that can generalize to the VAGLAHS with its large population of veterans experiencing homelessness, we selected facilities serving populations consisting of more than 1000 veterans experiencing homelessness.
At each selected facility, we first aimed to interview mental health leaders responsible for quality measurement and improvement identified from a national VA database. We then used snowball sampling to identify other informants at these VA facilities who were knowledgeable about relevant processes. Potential interviewees were contacted via email.
Interviews
The interview guide was developed by the interdisciplinary team and based on published literature about strategies for engaging patients with SMI in care. Interview guide questions focused on local practice arrangements, panel management, population health practices, and quality measurement and improvement efforts for engaging veterans with SMI in primary care (Appendix). Interviews were conducted by telephone, from May 2019 through July 2019, by experienced qualitative interviewers (JM, JB). Interviewees were assured confidentiality of their responses.
Interview audio recordings were used to generate detailed notes (AG). Structured summaries were prepared from these notes, using a template based on the interview guide. We organized these summaries into matrices for analysis, grouping summarized points by interview domains to facilitate comparison across interviews.10-11 Our team reviewed and discussed the matrices, and iteratively identified and defined themes to identify the common engagement approaches and the nature of the connections between mental health and primary care. To ensure rigor, findings were checked by the senior qualitative lead (JM).
Results
The median SMI engagement score—defined as the proportion of veterans with SMI who have had a primary care visit in the prior 12 months and who have an assigned PCP—was 75.6% across 1a and 1b VA facilities. We identified 16 VA facilities that had a median or higher score and more than 1000 enrolled veterans experiencing homelessness. From these16 facilities, we emailed 31 potential interviewees, 14 of whom were identified from a VA database and 17 referred by other interviewees. In total, we interviewed 18 key informants across 11 (69%) facilities, including chiefs of psychology and mental health services, PCPs with mental health expertise, QI specialists, a psychosocial rehabilitation leader, and a local recovery coordinator, who helps veterans with SMI access recovery-oriented services. Characteristics of the facilities and interviewees are shown in Table 1. Interviews lasted a mean 35 (range, 26-50) minutes.
Engagement Approaches
The strategies used to engage veterans with SMI were heterogenous, with no single strategy common across all facilities. However, we identified 2 categories of engagement approaches: targeted outreach and routine practices.
Targeted outreach strategies included deliberate, systematic approaches to reach veterans with SMI outside of regularly scheduled visits. These strategies were designed to be proactive, often prioritizing veterans at risk of disengaging from care. Designated VA care team members identified and reached out to veterans well before 12 months had passed since their prior visit (the VA definition of disengagement from care); visits included any care at VA, including, but not exclusively, primary care. Table 2 describes the key components of targeted outreach strategies: (1) identifying veterans’ last visit; (2) prioritizing which veterans to outreach to; and (3) assigning responsibility and reaching out. A key defining feature of targeted outreach is that veterans were identified and prioritized for outreach independent from any visits with mental health or other VA services.
In identifying veterans at risk for disengagement, a designated employee in mental health or primary care (eg, local recovery coordinator) reviewed a VA dashboard or locally developed report that identified veterans who have not engaged in care for several months. This process was repeated regularly. The designated employee either contacted those veterans directly or coordinated with other clinicians and support staff. When possible, a clinician or nurse with an existing relationship with the veteran would call them. If no such relationship existed, an administrative staff member made a cold call, sometimes accompanied by mailed outreach materials.
Routine practices were business-as-usual activities embedded in regular clinical workflows that facilitated engagement or reengagement of veterans with SMI in care. Of note, and in contrast to targeted outreach, these activities were tied to veteran visits with mental health practitioners. These practices were typically described as being at least as important as targeted outreach efforts. For example, during mental health visits, clinicians routinely checked the VA electronic health record to assess whether veterans had an assigned primary care team. If not, they would contact the primary care service to refer the patient for a primary care visit and assignment. If the patient already had a primary care team assigned, the mental health practitioner checked for recent primary care visits. If none were evident, the mental health practitioner might email the assigned PCP or contact them via instant message.
At some facilities, mental health support staff were able to directly schedule primary care appointments, which was identified as an important enabling factor in promoting mental health patient engagement in primary care. Some interviewees seemed to take for granted the idea that mental health practitioners would help engage patients in primary care—suggesting that these practices had perhaps become a cultural norm within their facility. However, some interviewees identified clear strategies for making these practices a consistent part of care—for example, by designing a protocol for initial mental health assessments to include a routine check for primary care engagement.
Mental Health/Primary Care Connections
Interviewees characterized the nature of the connections between mental health and primary care at their facilities. Nearly all interviewees described that their medical centers had extensive ties, formal and informal, between mental health and primary care.
Formal ties may include the reverse integration care model, in which primary care services are embedded in mental health settings. Interviewees at sites with programs based on this model noted that these programs enabled warm hand-offs from mental health to primary care and suggested that it can foster integration between primary care and mental health care for patients with SMI. However, the size, scope, and structure of these programs varied, sometimes serving a small proportion of a facility’s population of SMI patients. Other examples of formal ties included written agreements, establishing frequent, regular meetings between mental health and primary care leadership and front-line staff, and giving mental health clerks the ability to directly schedule primary care appointments.
Informal ties between mental health and primary care included communication and personal working relationships between mental health and PCPs, facilitated by mental health and primary care leaders working together in workgroups and other administrative activities. Some participants described a history of collaboration between mental health and primary care leaders yielding productive and trusting working relationships. Some interviewees described frequent direct communication between individual mental health practitioners and PCPs—either face-to-face or via secure messaging.
Discussion
VA facilities with high levels of primary care engagement among veterans with SMI used extensive engagement strategies, including a diverse array of targeted outreach and routine practices. In both approaches, intentional organizational structural and process decisions, as well as formal and informal ties between mental health and primary care, established and supported them. In addition, organizational cultural factors were especially relevant to routine practice strategies.
To enable targeted outreach, a bevy of organizational resources, both local and national were required. Large accountable care organizations and integrated delivery systems, like the VA, are often better able to create dashboards and other informational resources for population health management compared with smaller, less integrated health care systems. Though these resources are difficult to create in fragmented systems, comparable tools have been explored by multiple state health departments.12 Our findings suggest that these data tools, though resource intensive to develop, may enable facilities to be more methodical and reliable in conducting outreach to vulnerable patients.
In contrast to targeted outreach, routine practices depend less on population health management resources and more on cultural norms. Such norms are notoriously difficult to change, but intentional structural decisions like embedding primary care engagement in mental health protocols may signal that primary care engagement is an important and legitimate consideration for mental health care.13
We identified extensive and heterogenous connections between mental health and primary care in our sample of VA facilities with high engagement of patients with SMI in primary care. A growing body of literature on relational coordination studies the factors that contribute to organizational siloing and mechanisms for breaking down those silos so work can be coordinated across boundaries (eg, the organizational boundary between mental health and primary care).14 Coordinating care across these boundaries, through good relational coordination practices has been shown to improve outcomes in health care and other sectors. Notably, VA facilities in our sample had several of the defining characteristics of good relational coordination: relationships between mental health and primary care that include shared goals, shared knowledge, and mutual respect, all reinforced by frequent communication structured around problem solving.15 The relational coordination literature also offers a way to identify evidence-based interventions for facilitating relational coordination in places where it is lacking, for example, with information systems, boundary-spanning individuals, facility design, and formal conflict resolution.15 Future work might explore how relational coordination can be further used to optimize mental health and primary care connections to keep veterans with SMI engaged in care.
Our approach of interviewing informants in higher-performing facilities draws heavily on the idea of positive deviance, which holds that information on what works in health care is available from organizations that already are demonstrating “consistently exceptional performance.”16 This approach works best when high performance and organizational characteristics are observable for a large number of facilities, and when high-performing facilities are willing to share their strategies. These features allow investigators to identify promising practices and hypotheses that can then be empirically tested and compared. Such testing, including assessing for unintended consequences, is needed for the approaches we identified. Research is also needed to assess for factors that would promote the implementation of effective strategies.
Limitations
As a QI project seeking to identify promising practices, our interviews were limited to 18 key informants across 11 VA facilities with high engagement of care among veterans with SMI. No inferences can be made that these practices are directly related to this high level of engagement, nor the differential impact of different practices. Future work is needed to assess for these relationships. We also did not interview veterans to understand their perspectives on these strategies, which is an additional important topic for future work. In addition, these interviews were performed before the start of the COVID-19 pandemic. Further work is needed to understand how these strategies may have been modified in response to changes in practice. The shift to care from in-person to virtual services may have impacted both clinical interactions with veterans, as well as between clinicians.
Conclusions
Interviews with key informants demonstrate that engaging and retaining veterans with SMI in primary care is vital but requires intentional and potentially resource-intensive practices, including targeted outreach and routine engagement strategies embedded in mental health visits. These promising practices can provide valuable insights for both VA and community health care systems providing care to patients with SMI.
Acknowledgments
We thank Gracielle J. Tan, MD for administrative assistance in preparing this manuscript.
1. Liu NH, Daumit GL, Dua T, et al. Excess mortality in persons with severe mental disorders: a multilevel intervention framework and priorities for clinical practice, policy and research agendas. World Psychiatry. 2017;16(1):30-40. doi:10.1002/wps.20384
2. Bowersox NW, Kilbourne AM, Abraham KM, et al. Cause-specific mortality among veterans with serious mental illness lost to follow-up. Gen Hosp Psychiatry. 2012;34(6):651-653. doi:10.1016/j.genhosppsych.2012.05.014
3. Davis CL, Kilbourne AM, Blow FC, et al. Reduced mortality among Department of Veterans Affairs patients with schizophrenia or bipolar disorder lost to follow-up and engaged in active outreach to return for care. Am J Public Health. 2012;102(suppl 1):S74-S79. doi:10.2105/AJPH.2011.300502
4. Copeland LA, Zeber JE, Wang CP, et al. Patterns of primary care and mortality among patients with schizophrenia or diabetes: a cluster analysis approach to the retrospective study of healthcare utilization. BMC Health Serv Res. 2009;9:127. doi:10.1186/1472-6963-9-127
5. Abraham KM, Mach J, Visnic S, McCarthy JF. Enhancing treatment reengagement for veterans with serious mental illness: evaluating the effectiveness of SMI re-engage. Psychiatr Serv. 2018;69(8):887-895. doi:10.1176/appi.ps.201700407
6. Ward MC, Druss BG. Reverse integration initiatives for individuals with serious mental illness. Focus (Am Psychiatr Publ). 2017;15(3):271-278. doi:10.1176/appi.focus.20170011
7. Chang ET, Vinzon M, Cohen AN, Young AS. Effective models urgently needed to improve physical care for people with serious mental illnesses. Health Serv Insights. 2019;12:1178632919837628. Published 2019 Apr 2. doi:10.1177/1178632919837628
8. Gabrielian S, Gordon AJ, Gelberg L, et al. Primary care medical services for homeless veterans. Fed Pract. 2014;31(10):10-19.
9. Lemke S, Boden MT, Kearney LK, et al. Measurement-based management of mental health quality and access in VHA: SAIL mental health domain. Psychol Serv. 2017;14(1):1-12. doi:10.1037/ser0000097
10. Averill JB. Matrix analysis as a complementary analytic strategy in qualitative inquiry. Qual Health Res. 2002;12(6):855-866. doi:10.1177/104973230201200611
11. Zuchowski JL, Chrystal JG, Hamilton AB, et al. Coordinating care across health care systems for Veterans with gynecologic malignancies: a qualitative analysis. Med Care. 2017;55(suppl 1):S53-S60. doi:10.1097/MLR.0000000000000737
12. Daumit GL, Stone EM, Kennedy-Hendricks A, Choksy S, Marsteller JA, McGinty EE. Care coordination and population health management strategies and challenges in a behavioral health home model. Med Care. 2019;57(1):79-84. doi:10.1097/MLR.0000000000001023
13. Parmelli E, Flodgren G, Beyer F, et al. The effectiveness of strategies to change organisational culture to improve healthcare performance: a systematic review. Implement Sci. 2011;6:33. doi:10.1186/1748-5908-6-33
14. Bolton R, Logan C, Gittell JH. Revisiting relational coordination: a systematic review. J Appl Behav Sci. 2021;57(3):290-322. doi:10.1177/0021886321991597
15. Gittell JH, Godfrey M, Thistlethwaite J. Interprofessional collaborative practice and relational coordination: improving healthcare through relationships. J Interprof Care. 2013;27(3):210-213. doi:10.3109/13561820.2012.730564
16. Bradley EH, Curry LA, Ramanadhan S, Rowe L, Nembhard IM, Krumholz HM. Research in action: using positive deviance to improve quality of health care. Implement Sci. 2009;4:25. Published 2009 May 8. doi:10.1186/1748-5908-4-25
Catheter-Directed Retrieval of an Infected Fragment in a Vietnam War Veteran
Shrapnel injuries are commonly encountered in war zones.1 Retained fragments can remain asymptomatic or cause health effects ranging from local reactions to systemic toxicity, depending on the patient’s reaction to the chemical composition and corrosiveness of the fragments in vivo.2 We present a case of a reactivated shrapnel injury in the form of a retroperitoneal infection and subsequent iliopsoas abscess. Surgery and interventional radiology collaborated on a procedure to snare and remove the infected fragment and drain the abscess.
Case Presentation
While serving in Vietnam, a soldier sustained a fragment injury to his left lower abdomen. He underwent a laparotomy, small bowel resection, and a temporary ileostomy at the time of the injury. Nearly 50 years later, the patient presented with chronic left lower quadrant pain and a low-grade fever. He was diagnosed clinically in the emergency department (ED) with diverticulitis and treated with antibiotics. The patient initially responded to treatment but returned 6 months later with similar symptoms, low-grade fever, and mild leukocytosis. A computed tomography (CT) scan without IV contrast during that encounter revealed a few scattered colonic diverticula without definite diverticulitis, as well as a metallic fragment embedded in the left iliopsoas with increased soft tissue density.
The patient was diagnosed with a pelvic/abdominal wall hematoma and was discharged with pain medication. The patient reported recurrent attacks of left lower quadrant pain, fever, and changes in bowel habits, prompting gastrointestinal consultation and a colonoscopy that was unremarkable. Ten months later, the patient again presented to the ED with recurrent symptoms, a fever of 102 °F, and leukocytosis with a white blood cell count of 11.7 × 10⁹/L. A CT scan with IV contrast revealed a large left iliopsoas abscess associated with an approximately 1-cm metallic fragment (Figure 1). A drainage catheter was placed under CT guidance, and approximately 270 mL of purulent fluid was drained. Culture of the fluid was positive for Escherichia coli (E coli). Two days after drain placement, the fragment was removed in a joint procedure between interventional radiology and surgery. Using the drainage catheter tract as a point of entry, multiple attempts were made to retrieve the fragment with Olympus EndoJaw endoscopic forceps, without success.
Ultimately, a stiff directional sheath from a Cook Medical transjugular liver biopsy kit was used with a Merit Medical EnSnare to relocate the fragment to the left inguinal region for surgical excision (Figures 2, 3, and 4). The fragment was removed and swabbed for culture and sensitivity, and a BLAKE drain was placed in the evacuated abscess cavity. The patient tolerated the procedure well and was discharged the following day. Three days later, cultures grew E coli and Acinetobacter, confirming the infection and a nidus for the surrounding abscess formation. On follow-up with general surgery 7 days later, the patient reported he was doing well, and the drain was removed without difficulty.
Discussion
Foreign body injuries can be benign or debilitating depending on the initial damage, the anatomical location and composition of the foreign body, and the patient’s response to it. Retained shrapnel deep within muscle tissue rarely causes complications. Although embedded objects are often asymptomatic and require no further management, migration of the foreign body or formation of a fistula is possible, causing symptoms and requiring surgical intervention.1 One case involved a purulent fistula appearing a year after an explosive wound to the lumbosacral spine, which was treated with antimicrobials. Recurrence of the fistula several times after treatment led to surgical removal of the shrapnel along with antibiotic treatment of the osteomyelitis.3 Although uncommon, lead exposure from retained fragments of gunshot or military-related injuries can cause systemic lead toxicity. Symptoms may range from abdominal pain, nausea, and constipation to jaundice and hepatitis.4 Severity has also been reported to correlate with the surface area of lead exposed for dissolution.5 Migration of foreign bodies and shrapnel to other sites in the body, such as movement from soft tissues into distantly located body cavities, has been reported as well. One such case involved the spontaneous onset of knee synovitis due to an intra-articular metallic object introduced via a blast injury to the upper third of the ipsilateral thigh.1
In this patient’s case, a large intramuscular abscess had formed nearly 50 years after the initial combat injury, requiring drainage of the abscess and removal of the fragment. By snaring the foreign body to a more superficial site, surgical removal required only a minor incision, decreasing recovery time and the likelihood of the postoperative complications associated with a large retroperitoneal dissection. While the loop snare is often the first-line technique for removal of intravascular foreign bodies, its use for materials retained in soft tissue is rarely reported.6 More typical uses involve removal of intraluminal materials, such as partially fractured venous catheters, guide wires, stents, and vena cava filters. One report noted that in all 16 cases of percutaneous foreign body retrieval, no surgical intervention was required.7 For most nonvascular foreign bodies, however, surgical retrieval is usually performed.8
Surgical removal of foreign bodies can be difficult when a foreign body is located next to vital structures.9 A sole surgical approach is further challenged when the foreign body is small and lies deep within the soft tissue, as was the case for our patient. In such cases, the procedure can be time consuming and cause more trauma to the surrounding tissues.10 These factors alone necessitate consideration of postoperative morbidity and mortality.
In our patient, the retained fragment was embedded in the wall of an abscess located retroperitoneally in his iliopsoas muscle. When considering the proximity of the iliopsoas muscle to the digestive tract, urinary tract, and iliac lymph nodes, it is reasonable for infectious material to come in contact with the foreign body from these nearby structures, resulting in secondary infection.11 Surgery was previously considered the first-line treatment for retroperitoneal abscesses until the advent of imaging-guided percutaneous drainage.12
In some instances, surgical drainage may still be attempted, such as if there are different disease processes requiring open surgery or if percutaneous catheter drainage is not technically possible due to the location of the abscess, thick exudate, loculation/septations, or phlegmon. In these cases, laparoscopic drainage as opposed to open surgical drainage can provide the benefits of an open procedure (ie, total drainage and resection of infected tissue) but is less invasive, requires a smaller incision, and heals faster.13 Percutaneous drainage is the current first-line treatment due to the lack of need for general anesthesia, lower cost, and better morbidity and mortality outcomes compared to surgical methods.12 While percutaneous drainage proved to be immediately therapeutic for our patient, the risk of abscess recurrence with the retained infected fragment necessitated coordination of procedures across specialties to provide the best outcome for the patient.
Conclusions
This case demonstrates a multidisciplinary approach that transformed an otherwise large retroperitoneal dissection into a minimally invasive and technically efficient abscess drainage and foreign body retrieval.
1. Schroeder JE, Lowe J, Chaimsky G, Liebergall M, Mosheiff R. Migrating shrapnel: a rare cause of knee synovitis. Mil Med. 2010;175(11):929-930. doi:10.7205/milmed-d-09-00254
2. Centeno JA, Rogers DA, van der Voet GB, et al. Embedded fragments from U.S. military personnel—chemical analysis and potential health implications. Int J Environ Res Public Health. 2014;11(2):1261-1278. Published 2014 Jan 23. doi:10.3390/ijerph110201261
3. Carija R, Busic Z, Bradaric N, Bulovic B, Borzic Z, Pavicic-Perkovic S. Surgical removal of metallic foreign body (shrapnel) from the lumbosacral spine and the treatment of chronic osteomyelitis: a case report. West Indian Med J. 2014;63(4):373-375. doi:10.7727/wimj.2012.290
4. Grasso I, Blattner M, Short T, Downs J. Severe systemic lead toxicity resulting from extra-articular retained shrapnel presenting as jaundice and hepatitis: a case report and review of the literature. Mil Med. 2017;182(3-4):e1843-e1848. doi:10.7205/MILMED-D-16-00231
5. Dillman RO, Crumb CK, Lidsky MJ. Lead poisoning from a gunshot wound: report of a case and review of the literature. Am J Med. 1979;66(3):509-514. doi:10.1016/0002-9343(79)91083-0
6. Woodhouse JB, Uberoi R. Techniques for intravascular foreign body retrieval. Cardiovasc Intervent Radiol. 2013;36(4):888-897. doi:10.1007/s00270-012-0488-8
7. Mallmann CV, Wolf KJ, Wacker FK. Retrieval of vascular foreign bodies using a self-made wire snare. Acta Radiol. 2008;49(10):1124-1128. doi:10.1080/02841850802454741
8. Nosher JL, Siegel R. Percutaneous retrieval of nonvascular foreign bodies. Radiology. 1993;187(3):649-651. doi:10.1148/radiology.187.3.8497610
9. Fu Y, Cui LG, Romagnoli C, Li ZQ, Lei YT. Ultrasound-guided removal of retained soft tissue foreign body with late presentation. Chin Med J (Engl). 2017;130(14):1753-1754. doi:10.4103/0366-6999.209910
10. Liang HD, Li H, Feng H, Zhao ZN, Song WJ, Yuan B. Application of intraoperative navigation and positioning system in the removal of deep foreign bodies in the limbs. Chin Med J (Engl). 2019;132(11):1375-1377. doi:10.1097/CM9.0000000000000253
11. Moriarty CM, Baker RJ. A pain in the psoas. Sports Health. 2016;8(6):568-572. doi:10.1177/1941738116665112
12. Akhan O, Durmaz H, Balcı S, Birgi E, Çiftçi T, Akıncı D. Percutaneous drainage of retroperitoneal abscesses: variables for success, failure, and recurrence. Diagn Interv Radiol. 2020;26(2):124-130. doi:10.5152/dir.2019.19199
13. Hong CH, Hong YC, Bae SH, et al. Laparoscopic drainage as a minimally invasive treatment for a psoas abscess: a single center case series and literature review. Medicine (Baltimore). 2020;99(14):e19640. doi:10.1097/MD.0000000000019640
A ‘big breakfast’ diet affects hunger, not weight loss
The findings, published in Cell Metabolism, come from researchers at the University of Aberdeen. The idea that ‘front-loading’ calories early in the day might help dieting attempts was based on the belief that consuming the bulk of daily calories in the morning optimizes weight loss by burning calories more efficiently and quickly.
“There are a lot of myths surrounding the timing of eating and how it might influence either body weight or health,” said senior author Alexandra Johnstone, PhD, a researcher at the Rowett Institute, University of Aberdeen, who specializes in appetite control. “This has been driven largely by the circadian rhythm field. But we in the nutrition field have wondered how this could be possible. Where would the energy go? We decided to take a closer look at how time of day interacts with metabolism.”
Her team undertook a randomized crossover trial of 30 overweight and obese subjects recruited via social media ads. Participants – 16 men and 14 women – had a mean age of 51 years and a body mass index of 27-42 kg/m² but were otherwise healthy. The researchers compared two calorie-restricted but isoenergetic weight loss diets: morning-loaded calories, with 45% of intake at breakfast, 35% at lunch, and 20% at dinner, and evening-loaded calories, with the inverse proportions of 20%, 35%, and 45% at breakfast, lunch, and dinner, respectively.
Each diet was followed for 4 weeks, with a controlled baseline diet in which calories were balanced throughout the day provided for 1 week at the outset and during a 1-week washout period between the two intervention diets. Each person’s calorie intake was fixed, referenced to their individual measured resting metabolic rate, to assess the effect on weight loss and energy expenditure of meal timing under isoenergetic intake. Both diets were designed to provide the same nutrient composition of 30% protein, 35% carbohydrate, and 35% fat.
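To make the arithmetic of the two arms concrete, here is a minimal sketch of the meal-level calorie allocation. The 45/35/20 and 20/35/45 splits come from the study as described above; the resting metabolic rate value and the multiplier tying daily intake to it are hypothetical placeholders, since the paper’s exact calorie prescription is not reproduced here.

```python
# Illustrative sketch of the two isoenergetic meal-timing arms.
# The splits come from the study; the RMR value and the intake
# multiplier below are hypothetical placeholders.

MORNING_LOADED = {"breakfast": 0.45, "lunch": 0.35, "dinner": 0.20}
EVENING_LOADED = {"breakfast": 0.20, "lunch": 0.35, "dinner": 0.45}

def meal_calories(daily_kcal: float, split: dict) -> dict:
    """Allocate a fixed daily calorie budget across the three meals."""
    return {meal: round(daily_kcal * share) for meal, share in split.items()}

rmr_kcal = 1500              # hypothetical measured resting metabolic rate
daily_kcal = rmr_kcal * 1.1  # hypothetical prescription referenced to RMR

print(meal_calories(daily_kcal, MORNING_LOADED))
# {'breakfast': 742, 'lunch': 578, 'dinner': 330} -> same 1650 kcal total
print(meal_calories(daily_kcal, EVENING_LOADED))
# {'breakfast': 330, 'lunch': 578, 'dinner': 742} -> same 1650 kcal total
```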
All food and beverages were provided, “making this the most rigorously controlled study to assess timing of eating in humans to date,” the team said, “with the aim of accounting for all aspects of energy balance.”
No optimum time to eat for weight loss
Both diets produced significant weight reduction at the end of each dietary intervention period, with subjects losing an average of just over 3 kg during each 4-week period. However, there was no difference in weight loss between the morning-loaded and evening-loaded diets.
The relative size of breakfast and dinner – whether a person eats the largest meal early or late in the day – does not have an impact on metabolism, the team said. This challenges previous studies that have suggested that “evening eaters” – now a majority of the U.K. population – have a greater likelihood of gaining weight and greater difficulty in losing it.
“Participants were provided with all their meals for 8 weeks and their energy expenditure and body composition monitored for changes, using gold standard techniques at the Rowett Institute,” Dr. Johnstone said. “The same number of calories was consumed by volunteers at different times of the day, with energy expenditure measures using analysis of urine.
“This study is important because it challenges the previously held belief that eating at different times of the day leads to differential energy expenditure. The research shows that under weight loss conditions there is no optimum time to eat in order to manage weight, and that change in body weight is determined by energy balance.”
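As a rough, back-of-the-envelope illustration of that energy-balance point: assuming the common approximation of about 7,700 kcal of energy deficit per kilogram of body weight lost (a textbook rule of thumb, not a figure from this study), the reported average loss implies a daily deficit on the order of 800 kcal in both arms.

```python
# Back-of-the-envelope energy-balance arithmetic. The ~7,700 kcal/kg
# figure is a common approximation, not a value from this study.
KCAL_PER_KG = 7700

avg_loss_kg = 3.0  # average weight loss reported over each 4-week arm
days = 28          # length of each intervention period

implied_daily_deficit = avg_loss_kg * KCAL_PER_KG / days
print(f"Implied deficit: {implied_daily_deficit:.0f} kcal/day")  # ~825 kcal/day
```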
Meal timing reduces hunger but does not affect weight loss
However, the research also revealed that when subjects consumed the morning-loaded (big breakfast) diet, they reported feeling significantly less hungry later in the day. “Morning-loaded intake may assist with compliance to weight loss regime, through a greater suppression of appetite,” the authors said, adding that this “could foster easier weight loss in the real world.”
“The participants reported that their appetites were better controlled on the days they ate a bigger breakfast and that they felt satiated throughout the rest of the day,” Dr. Johnstone said.
“We know that appetite control is important to achieve weight loss, and our study suggests that those consuming the most calories in the morning felt less hungry, in contrast to when they consumed more calories in the evening period.
“This could be quite useful in the real-world environment, versus in the research setting that we were working in.”
‘Major finding’ for chrono-nutrition
Coauthor Jonathan Johnston, PhD, professor of chronobiology and integrative physiology at the University of Surrey, said: “This is a major finding for the field of meal timing (‘chrono-nutrition’) research. Many aspects of human biology change across the day and we are starting to understand how this interacts with food intake.
“Our new research shows that, in weight loss conditions, the size of breakfast and dinner regulates our appetite but not the total amount of energy that our bodies use,” Dr. Johnston said. “We plan to build upon this research to improve the health of the general population and specific groups, eg, shift workers.”
It’s possible that shift workers could have different metabolic responses, due to the disruption of their circadian rhythms, the team said. Dr. Johnstone noted that this type of experiment could also be applied to the study of intermittent fasting (time-restricted eating), to help determine the best time of day for people to consume their calories.
“One thing that’s important to note is that when it comes to timing and dieting, there is not likely going to be one diet that fits all,” she concluded. “Figuring this out is going to be the future of diet studies, but it’s something that’s very difficult to measure.”
Great variability in individual responses to diets
Commenting on the study, Helena Gibson-Moore, RNutr (PH), nutrition scientist and spokesperson for the British Nutrition Foundation, said: “With about two in three adults in the UK either overweight or obese, it’s important that research continues to look into effective strategies for people to lose weight.” She described the study as “interesting,” and a challenge to previous research supporting “front-loading” calories earlier in the day as more effective for weight loss.
“However, whilst in this study there were no differences in weight loss, participants did report significantly lower hunger when eating a higher proportion of calories in the morning,” she said. “Therefore, for people who prefer having a big breakfast this may still be a useful way to help compliance to a weight loss regime through feeling less hungry in the evening, which in turn may lead to a reduced calorie intake later in the day.
“However, research has shown that as individuals we respond to diets in different ways. For example, a study comparing weight loss after a healthy low-fat diet vs. a healthy low-carbohydrate diet showed similar mean weight loss at 12 months, but there was large variability in the personal responses to each diet with some participants actually gaining weight.
“Differences in individual responses to dietary exposures have led to research into a personalized nutrition approach, which requires collection of personal data and then provides individualized advice based on this.” Research has suggested that personalized dietary and physical activity advice was more effective than conventional generalized advice, she said.
“The bottom line for effective weight loss is that it is clear there is ‘no one size fits all’ approach and different weight loss strategies can work for different people, but finding effective strategies for long-term sustainability of weight loss continues to be the major challenge. There are many factors that impact successful weight management, and for some people it may not just be what we eat that is important, but also how and when we eat.”
This study was funded by the Medical Research Council and the Scottish Government, Rural and Environment Science and Analytical Services Division.
A version of this article first appeared on Medscape.co.uk.
Cell Metabolism, from the University of Aberdeen. The idea that ‘front-loading’ calories early in the day might help dieting attempts was based on the belief that consuming the bulk of daily calories in the morning optimizes weight loss by burning calories more efficiently and quickly.
, published in“There are a lot of myths surrounding the timing of eating and how it might influence either body weight or health,” said senior author Alexandra Johnstone, PhD, a researcher at the Rowett Institute, University of Aberdeen, who specializes in appetite control. “This has been driven largely by the circadian rhythm field. But we in the nutrition field have wondered how this could be possible. Where would the energy go? We decided to take a closer look at how time of day interacts with metabolism.”
Her team undertook a randomized crossover trial of 30 overweight and obese subjects recruited via social media ads. Participants – 16 men and 14 women – had a mean age of 51 years, and body mass index of 27-42 kg/ m2 but were otherwise healthy. The researchers compared two calorie-restricted but isoenergetic weight loss diets: morning-loaded calories with 45% of intake at breakfast, 35% at lunch, and 20% at dinner, and evening-loaded calories with the inverse proportions of 20%, 35%, and 45% at breakfast, lunch, and dinner, respectively.
Each diet was followed for 4 weeks, with a controlled baseline diet in which calories were balanced throughout the day provided for 1 week at the outset and during a 1-week washout period between the two intervention diets. Each person’s calorie intake was fixed, referenced to their individual measured resting metabolic rate, to assess the effect on weight loss and energy expenditure of meal timing under isoenergetic intake. Both diets were designed to provide the same nutrient composition of 30% protein, 35% carbohydrate, and 35% fat.
All food and beverages were provided, “making this the most rigorously controlled study to assess timing of eating in humans to date,” the team said, “with the aim of accounting for all aspects of energy balance.”
No optimum time to eat for weight loss
Results showed that both diets resulted in significant weight reduction at the end of each dietary intervention period, with subjects losing an average of just over 3 kg during each of the 4-week periods. However, there was no difference in weight loss between the morning-loaded and evening-loaded diets.
The relative size of breakfast and dinner – whether a person eats the largest meal early or late in the day – does not have an impact on metabolism, the team said. This challenges previous studies that have suggested that “evening eaters” – now a majority of the U.K. population – have a greater likelihood of gaining weight and greater difficulty in losing it.
“Participants were provided with all their meals for 8 weeks and their energy expenditure and body composition monitored for changes, using gold standard techniques at the Rowett Institute,” Dr. Johnstone said. “The same number of calories was consumed by volunteers at different times of the day, with energy expenditure measures using analysis of urine.
“This study is important because it challenges the previously held belief that eating at different times of the day leads to differential energy expenditure. The research shows that under weight loss conditions there is no optimum time to eat in order to manage weight, and that change in body weight is determined by energy balance.”
Meal timing reduces hunger but does not affect weight loss
However, the research also revealed that when subjects consumed the morning-loaded (big breakfast) diet, they reported feeling significantly less hungry later in the day. “Morning-loaded intake may assist with compliance to weight loss regime, through a greater suppression of appetite,” the authors said, adding that this “could foster easier weight loss in the real world.”
“The participants reported that their appetites were better controlled on the days they ate a bigger breakfast and that they felt satiated throughout the rest of the day,” Dr. Johnstone said.
“We know that appetite control is important to achieve weight loss, and our study suggests that those consuming the most calories in the morning felt less hungry, in contrast to when they consumed more calories in the evening period.
“This could be quite useful in the real-world environment, versus in the research setting that we were working in.”
‘Major finding’ for chrono-nutrition
Coauthor Jonathan Johnston, PhD, professor of chronobiology and integrative physiology at the University of Surrey, said: “This is a major finding for the field of meal timing (‘chrono-nutrition’) research. Many aspects of human biology change across the day and we are starting to understand how this interacts with food intake.
“Our new research shows that, in weight loss conditions, the size of breakfast and dinner regulates our appetite but not the total amount of energy that our bodies use,” Dr. Johnston said. “We plan to build upon this research to improve the health of the general population and specific groups, e.g, shift workers.”
It’s possible that shift workers could have different metabolic responses, due to the disruption of their circadian rhythms, the team said. Dr. Johnstone noted that this type of experiment could also be applied to the study of intermittent fasting (time-restricted eating), to help determine the best time of day for people to consume their calories.
“One thing that’s important to note is that when it comes to timing and dieting, there is not likely going to be one diet that fits all,” she concluded. “Figuring this out is going to be the future of diet studies, but it’s something that’s very difficult to measure.”
Great variability in individual responses to diets
Commenting on the study, Helena Gibson-Moore, RNutr (PH), nutrition scientist and spokesperson for the British Nutrition Foundation, said: “With about two in three adults in the UK either overweight or obese, it’s important that research continues to look into effective strategies for people to lose weight.” She described the study as “interesting,” and a challenge to previous research supporting “front-loading” calories earlier in the day as more effective for weight loss.
“However, whilst in this study there were no differences in weight loss, participants did report significantly lower hunger when eating a higher proportion of calories in the morning,” she said. “Therefore, for people who prefer having a big breakfast this may still be a useful way to help compliance to a weight loss regime through feeling less hungry in the evening, which in turn may lead to a reduced calorie intake later in the day.
“However, research has shown that as individuals we respond to diets in different ways. For example, a study comparing weight loss after a healthy low-fat diet vs. a healthy low-carbohydrate diet showed similar mean weight loss at 12 months, but there was large variability in the personal responses to each diet with some participants actually gaining weight.
“Differences in individual responses to dietary exposures has led to research into a personalized nutrition approach which requires collection of personal data and then provides individualized advice based on this.” Research has suggested that personalized dietary and physical activity advice was more effective than conventional generalized advice, she said.
“The bottom line for effective weight loss is that it is clear there is ‘no one size fits all’ approach and different weight loss strategies can work for different people but finding effective strategies for long-term sustainability of weight loss continues to be the major challenge. There are many factors that impact successful weight management and for some people it may not just be what we eat that is important, but also how and when we eat.”
This study was funded by the Medical Research Council and the Scottish Government, Rural and Environment Science and Analytical Services Division.
A version of this article first appeared on Medscape.co.uk.
Cell Metabolism, from the University of Aberdeen. The idea that ‘front-loading’ calories early in the day might help dieting attempts was based on the belief that consuming the bulk of daily calories in the morning optimizes weight loss by burning calories more efficiently and quickly.
, published in“There are a lot of myths surrounding the timing of eating and how it might influence either body weight or health,” said senior author Alexandra Johnstone, PhD, a researcher at the Rowett Institute, University of Aberdeen, who specializes in appetite control. “This has been driven largely by the circadian rhythm field. But we in the nutrition field have wondered how this could be possible. Where would the energy go? We decided to take a closer look at how time of day interacts with metabolism.”
Her team undertook a randomized crossover trial of 30 overweight and obese subjects recruited via social media ads. Participants – 16 men and 14 women – had a mean age of 51 years, and body mass index of 27-42 kg/ m2 but were otherwise healthy. The researchers compared two calorie-restricted but isoenergetic weight loss diets: morning-loaded calories with 45% of intake at breakfast, 35% at lunch, and 20% at dinner, and evening-loaded calories with the inverse proportions of 20%, 35%, and 45% at breakfast, lunch, and dinner, respectively.
Each diet was followed for 4 weeks, with a controlled baseline diet in which calories were balanced throughout the day provided for 1 week at the outset and during a 1-week washout period between the two intervention diets. Each person’s calorie intake was fixed, referenced to their individual measured resting metabolic rate, to assess the effect on weight loss and energy expenditure of meal timing under isoenergetic intake. Both diets were designed to provide the same nutrient composition of 30% protein, 35% carbohydrate, and 35% fat.
All food and beverages were provided, “making this the most rigorously controlled study to assess timing of eating in humans to date,” the team said, “with the aim of accounting for all aspects of energy balance.”
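As a rough illustration of the arithmetic behind this design, the sketch below computes the per-meal calorie targets for both loading schedules from a daily allowance. The 1,900-kcal figure is a hypothetical stand-in for an individual’s measured resting metabolic rate; only the percentage splits come from the study.

```python
# Illustrative sketch of the trial's meal-loading arithmetic.
# DAILY_KCAL is a hypothetical example; in the trial, each participant's
# intake was fixed from their measured resting metabolic rate.

DAILY_KCAL = 1900  # assumed value, not from the paper

SCHEDULES = {
    "morning-loaded": {"breakfast": 0.45, "lunch": 0.35, "dinner": 0.20},
    "evening-loaded": {"breakfast": 0.20, "lunch": 0.35, "dinner": 0.45},
}

# Both diets shared the same nutrient composition by energy.
MACROS = {"protein": 0.30, "carbohydrate": 0.35, "fat": 0.35}

for name, meals in SCHEDULES.items():
    print(name)
    for meal, share in meals.items():
        print(f"  {meal}: {DAILY_KCAL * share:.0f} kcal")

# Daily macronutrient energy is identical under both schedules,
# which is what makes the two diets isoenergetic.
for macro, share in MACROS.items():
    print(f"{macro}: {DAILY_KCAL * share:.0f} kcal/day")
```

Because the two schedules are mirror images that sum to the same total, any difference between arms can come only from timing, which is exactly the variable the trial was designed to isolate.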
No optimum time to eat for weight loss
Results showed that both diets resulted in significant weight reduction at the end of each dietary intervention period, with subjects losing an average of just over 3 kg during each of the 4-week periods. However, there was no difference in weight loss between the morning-loaded and evening-loaded diets.
The relative size of breakfast and dinner – whether a person eats the largest meal early or late in the day – does not have an impact on metabolism, the team said. This challenges previous studies that have suggested that “evening eaters” – now a majority of the U.K. population – have a greater likelihood of gaining weight and greater difficulty in losing it.
“Participants were provided with all their meals for 8 weeks and their energy expenditure and body composition monitored for changes, using gold standard techniques at the Rowett Institute,” Dr. Johnstone said. “The same number of calories was consumed by volunteers at different times of the day, with energy expenditure measured using analysis of urine.
“This study is important because it challenges the previously held belief that eating at different times of the day leads to differential energy expenditure. The research shows that under weight loss conditions there is no optimum time to eat in order to manage weight, and that change in body weight is determined by energy balance.”
Meal timing reduces hunger but does not affect weight loss
However, the research also revealed that when subjects consumed the morning-loaded (big breakfast) diet, they reported feeling significantly less hungry later in the day. “Morning-loaded intake may assist with compliance to weight loss regime, through a greater suppression of appetite,” the authors said, adding that this “could foster easier weight loss in the real world.”
“The participants reported that their appetites were better controlled on the days they ate a bigger breakfast and that they felt satiated throughout the rest of the day,” Dr. Johnstone said.
“We know that appetite control is important to achieve weight loss, and our study suggests that those consuming the most calories in the morning felt less hungry, in contrast to when they consumed more calories in the evening period.
“This could be quite useful in the real-world environment, versus in the research setting that we were working in.”
‘Major finding’ for chrono-nutrition
Coauthor Jonathan Johnston, PhD, professor of chronobiology and integrative physiology at the University of Surrey, said: “This is a major finding for the field of meal timing (‘chrono-nutrition’) research. Many aspects of human biology change across the day and we are starting to understand how this interacts with food intake.
“Our new research shows that, in weight loss conditions, the size of breakfast and dinner regulates our appetite but not the total amount of energy that our bodies use,” Dr. Johnston said. “We plan to build upon this research to improve the health of the general population and specific groups, e.g., shift workers.”
It’s possible that shift workers could have different metabolic responses, due to the disruption of their circadian rhythms, the team said. Dr. Johnstone noted that this type of experiment could also be applied to the study of intermittent fasting (time-restricted eating), to help determine the best time of day for people to consume their calories.
“One thing that’s important to note is that when it comes to timing and dieting, there is not likely going to be one diet that fits all,” she concluded. “Figuring this out is going to be the future of diet studies, but it’s something that’s very difficult to measure.”
Great variability in individual responses to diets
Commenting on the study, Helena Gibson-Moore, RNutr (PH), nutrition scientist and spokesperson for the British Nutrition Foundation, said: “With about two in three adults in the UK either overweight or obese, it’s important that research continues to look into effective strategies for people to lose weight.” She described the study as “interesting,” and a challenge to previous research supporting “front-loading” calories earlier in the day as more effective for weight loss.
“However, whilst in this study there were no differences in weight loss, participants did report significantly lower hunger when eating a higher proportion of calories in the morning,” she said. “Therefore, for people who prefer having a big breakfast this may still be a useful way to help compliance to a weight loss regime through feeling less hungry in the evening, which in turn may lead to a reduced calorie intake later in the day.
“However, research has shown that as individuals we respond to diets in different ways. For example, a study comparing weight loss after a healthy low-fat diet vs. a healthy low-carbohydrate diet showed similar mean weight loss at 12 months, but there was large variability in the personal responses to each diet with some participants actually gaining weight.
“Differences in individual responses to dietary exposures have led to research into a personalized nutrition approach which requires collection of personal data and then provides individualized advice based on this.” Research has suggested that personalized dietary and physical activity advice was more effective than conventional generalized advice, she said.
“The bottom line for effective weight loss is that it is clear there is ‘no one size fits all’ approach and different weight loss strategies can work for different people but finding effective strategies for long-term sustainability of weight loss continues to be the major challenge. There are many factors that impact successful weight management and for some people it may not just be what we eat that is important, but also how and when we eat.”
This study was funded by the Medical Research Council and the Scottish Government, Rural and Environment Science and Analytical Services Division.
A version of this article first appeared on Medscape.co.uk.
FROM CELL METABOLISM
Successful Use of Lanadelumab in an Older Patient With Type II Hereditary Angioedema
Hereditary angioedema (HAE) is a rare genetic disorder that affects about 1 in 67,000 individuals and may lead to increased morbidity and mortality.1,2 HAE is characterized by recurring episodes of subcutaneous and/or submucosal edema without urticaria due to an excess of bradykinin.2,3 Inheritance is autosomal dominant in 75% of patients, and HAE is classified into 2 main types.2 Type I HAE is caused by deficiency of C1 esterase inhibitor and accounts for 85% of cases.2 Type II HAE is marked by normal to elevated levels of C1 esterase inhibitor but with reduced activity.2
Cutaneous and abdominal angioedema attacks are the most common presentation.1 However, any location may be affected, including the face, oropharynx, and larynx.1 Only 0.9% of all HAE attacks cause laryngeal edema, but 50% of HAE patients have experienced a laryngeal attack, which may be lethal.1 An angioedema attack can range in severity, depending on the location and degree of edema.3 In addition, patients with HAE often are diagnosed with anxiety and depression secondary to their poor quality of life.4 Thus, long-term prophylaxis of attacks is crucial to reduce the physical and psychological implications.
Previously, HAE was treated with antifibrinolytic agents and attenuated androgens for short- and long-term prophylaxis.1 These treatment modalities are now considered second-line since the development of novel medications with improved efficacy and limited adverse effects (AEs).1 For long-term prophylaxis, subcutaneous and IV C1 esterase inhibitor has been proven effective in both types I and II HAE.1 Another option, lanadelumab, a subcutaneously delivered monoclonal antibody inhibitor of plasma kallikrein, has been proven to decrease the frequency of HAE attacks without significant AEs.5 Lanadelumab works by binding to the active site of plasma kallikrein, which reduces its activity and slows the production of bradykinin.6 This results in decreasing vascular permeability and swelling episodes in patients with HAE.7 Data, however, are limited, specifically regarding patients with type II HAE and patients aged ≥ 65 years.5 This article reports on an older male with type II HAE successfully treated with lanadelumab.
Case Presentation
An 81-year-old male patient with hypertension, hypertriglyceridemia, and aortic aneurysm had recurrent, frequent episodes of severe abdominal pain with a remote history of extremity and scrotal swelling since adolescence. He was misdiagnosed for years and was eventually determined to have HAE at age 75 years after his niece was diagnosed, prompting him to be reevaluated for his frequent bouts of abdominal pain. His laboratory findings were consistent with HAE type II with low C4 (7.8 mg/dL), normal C1 esterase inhibitor levels (24 mg/dL), and low levels of C1 esterase inhibitor activity (28% of normal).
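To make the diagnostic logic concrete, here is a minimal sketch that applies the type I/type II criteria described above to complement labs. The reference thresholds are illustrative assumptions (each laboratory defines its own ranges); only the patient’s three reported values are taken from this case.

```python
# Hedged sketch of HAE type classification from complement labs.
# Reference thresholds below are illustrative assumptions only;
# clinical use requires the reporting laboratory's own ranges.

C4_LOW = 14.0            # mg/dL, assumed lower limit of normal
C1INH_LEVEL_LOW = 19.0   # mg/dL, assumed lower limit of normal
C1INH_FUNC_LOW = 68.0    # % of normal activity, assumed threshold

def classify_hae(c4: float, c1inh_level: float, c1inh_function: float) -> str:
    """Suggest an HAE type from C4, C1-INH level, and C1-INH function."""
    if c4 >= C4_LOW:
        return "normal C4: HAE with C1-INH abnormality unlikely"
    if c1inh_level < C1INH_LEVEL_LOW:
        return "consistent with type I HAE (low C1-INH level)"
    if c1inh_function < C1INH_FUNC_LOW:
        return "consistent with type II HAE (normal level, reduced function)"
    return "indeterminate: consider repeat testing"

# The patient's reported values: C4 7.8 mg/dL, C1-INH 24 mg/dL, 28% activity.
print(classify_hae(7.8, 24.0, 28.0))  # -> consistent with type II HAE
```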
Initially, he described having weekly attacks of abdominal pain that could last 1 to several days. At worst, these attacks would last up to a month, causing a decrease in appetite and weight loss. At age 77 years, he began an on-demand treatment, icatibant, a bradykinin receptor blocker. After initiating icatibant during an acute attack, the pain would diminish within 1 to 2 hours, and within several hours, he would be pain free. Previously, pain relief would take several days to weeks. He continued to use icatibant on-demand, typically requiring treatment every 1 to 2 months for only the more severe attacks.
After an increasing frequency of abdominal pain attacks, prophylactic medication was recommended. Therefore, subcutaneous lanadelumab 300 mg every 2 weeks was initiated for long-term prophylaxis. The patient went from requiring on-demand treatment 2 to 3 times per month to once in 6 months after starting lanadelumab. In addition, he tolerated the medication well without any AEs.
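As a back-of-the-envelope check on the size of this response, reading “2 to 3 times per month” as a midpoint of 2.5 and “once in 6 months” as roughly 0.17 per month implies a reduction in on-demand use of over 90%. Both conversions are our interpretation of the narrative, not figures reported by the authors.

```python
# Rough arithmetic on the reported change in on-demand icatibant use.
# Monthly rates are interpretations of the narrative, not study data.

before_per_month = 2.5   # "2 to 3 times per month", taken as the midpoint
after_per_month = 1 / 6  # "once in 6 months"

reduction = 1 - after_per_month / before_per_month
print(f"~{reduction:.0%} fewer on-demand treatments")  # ~93%
```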
Discussion
According to the international WAO/EAACI 2021 guidelines, HAE treatment goals are “to achieve complete control of the disease and to normalize patients’ lives.”8 On-demand treatment options include C1 esterase inhibitor, icatibant, or ecallantide (a kallikrein inhibitor).8 Long-term prophylaxis in HAE should be considered, accounting for disease activity, burden, control, and patient preference. Five medications have been used for long-term prophylaxis: antifibrinolytic agents (not recommended), attenuated androgens (considered second-line), C1 esterase inhibitor, berotralstat, and lanadelumab.8
Antifibrinolytics are no longer recommended for long-term prophylactic treatment because their efficacy is poor; they were not considered for our patient. Attenuated androgens, such as danazol, have a history of prophylactic use in patients with HAE due to their good efficacy but are suboptimal due to their significant AE profile and many drug-drug interactions.8 In addition, androgens have many contraindications, including hypertension and hypertriglyceridemia, both of which were present in our patient. Consequently, danazol was not an advised treatment for our patient. C1 esterase inhibitor is often used to prevent HAE attacks and can be given intravenously or subcutaneously, typically administered twice weekly. A potential AE of C1 esterase inhibitor is thrombosis. Therefore, C1 esterase inhibitor was not a preferred choice in our older patient with a history of hypercoagulability. Berotralstat, a plasma kallikrein inhibitor, is an oral option that has also shown efficacy in long-term prophylaxis. The most common AEs of berotralstat are gastrointestinal symptoms, and the medication requires dose adjustment for patients with hepatic impairment.8 Berotralstat was not considered because it was not an approved treatment option at the time of this patient’s treatment. Lanadelumab is a human monoclonal antibody against plasma kallikrein that decreases bradykinin production in patients with HAE, thus preventing angioedema attacks.5 Data regarding the use of lanadelumab in patients with type II HAE are limited, but because HAE with normal C1 esterase inhibitor levels involves the production of bradykinin via kallikrein, lanadelumab should still be effective.1 Lanadelumab was chosen for our patient because of its minimal AEs; it is not known to increase the risk of thrombosis.
Lanadelumab is a novel medication, approved in 2018 by the US Food and Drug Administration for the treatment of types I and II HAE in patients aged ≥ 12 years.7 The phase 3 Hereditary Angioedema Long-term Prophylaxis (HELP) study concluded that treatment with subcutaneous lanadelumab for 26 weeks significantly decreased the frequency of angioedema attacks compared with placebo.5 However, 113 (90.4%) of the patients in the HELP study had type I HAE.5 Of the 125 patients who completed this randomized, double-blind study, only 12 had type II HAE.5 In addition, the study included only 5 patients aged ≥ 65 years, and no patients aged ≥ 65 years were in the treatment arms that received a lanadelumab dose of 300 mg.5 In a case series of 12 patients in Canada, treatment with lanadelumab decreased angioedema attacks by 72%.9 However, that series included only 1 patient with type II HAE, who was aged 36 years.9 Therefore, our case demonstrates the efficacy of lanadelumab in a patient aged ≥ 65 years with type II HAE.
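The subgroup figures quoted above are internally consistent and can be checked in a line or two: 113 type I plus 12 type II completers equals 125, and 113/125 is 90.4%.

```python
# Quick consistency check on the HELP-study figures cited above.
completers = 125
type_i = 113
type_ii = 12

assert type_i + type_ii == completers  # 113 + 12 == 125

print(f"type I:  {type_i / completers:.1%}")   # 90.4%
print(f"type II: {type_ii / completers:.1%}")  # 9.6%
```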
Conclusions
HAE is a rare and potentially fatal disease characterized by recurrent, unpredictable attacks of edema throughout the body. The disease burden adversely affects a patient’s quality of life. Therefore, long-term prophylaxis is critical to managing patients with HAE. Lanadelumab has been proven as an effective long-term prophylactic treatment option for HAE attacks. This case supports the use of lanadelumab in patients with type II HAE and patients aged ≥ 65 years.
Acknowledgments
This patient’s delayed diagnosis was initially written up as a case report.3 An earlier version of this article was presented by Samuel Weiss, MD, and Derek Smith, MD, as a poster at the American Academy of Allergy, Asthma, and Immunology virtual conference February 26 to March 1, 2021.
1. Busse PJ, Christiansen SC. Hereditary angioedema. N Engl J Med. 2020;382(12):1136-1148. doi:10.1056/NEJMra1808012
2. Bernstein JA. Severity of hereditary angioedema, prevalence, and diagnostic considerations. Am J Manag Care. 2018;24(14)(suppl):S292-S298.
3. Berger J, Carroll MP Jr, Champoux E, Coop CA. Extremely delayed diagnosis of type II hereditary angioedema: case report and review of the literature. Mil Med. 2018;183(11-12):e765-e767. doi:10.1093/milmed/usy031
4. Fouche AS, Saunders EF, Craig T. Depression and anxiety in patients with hereditary angioedema. Ann Allergy Asthma Immunol. 2014;112(4):371-375. doi:10.1016/j.anai.2013.05.028
5. Banerji A, Riedl MA, Bernstein JA, et al; HELP Investigators. Effect of lanadelumab compared with placebo on prevention of hereditary angioedema attacks: a randomized clinical trial. JAMA. 2018;320(20):2108-2121. doi:10.1001/jama.2018.16773
6. Busse PJ, Farkas H, Banerji A, et al. Lanadelumab for the prophylactic treatment of hereditary angioedema with C1 inhibitor deficiency: a review of preclinical and phase I studies. BioDrugs. 2019;33(1):33-43. doi:10.1007/s40259-018-0325-y
7. Riedl MA, Maurer M, Bernstein JA, et al. Lanadelumab demonstrates rapid and sustained prevention of hereditary angioedema attacks. Allergy. 2020;75(11):2879-2887. doi:10.1111/all.14416
8. Maurer M, Magerl M, Betschel S, et al. The international WAO/EAACI guideline for the management of hereditary angioedema—the 2021 revision and update. Allergy. 2022;77(7):1961-1990. doi:10.1111/all.15214
9. Iaboni A, Kanani A, Lacuesta G, Song C, Kan M, Betschel SD. Impact of lanadelumab in hereditary angioedema: a case series of 12 patients in Canada. Allergy Asthma Clin Immunol. 2021;17(1):78. Published 2021 Jul 23. doi:10.1186/s13223-021-00579-6