Neonatal sepsis: WHO-recommended Rx needs a major rethink
First-line treatment of neonatal sepsis in low- and middle-income countries (LMICs) with ampicillin-gentamicin – as recommended by the World Health Organization – needs to be reassessed, a retrospective, observational cohort study suggests. Rates of resistance to this particular antibiotic combination are extremely high in LMICs, and this treatment is unlikely to save many neonatal patients, according to the study’s results.
“The WHO guidelines are over 10 years old, and they are actually based on high-income country data, whereas data reported from low-income countries are reported by private labs, and they do not cater to the lower socioeconomic groups within these countries, which is important data to capture,” Timothy Walsh, MD, University of Oxford, United Kingdom, told this news organization.
“The main take-home message from our data is that ampicillin-gentamicin doesn’t work for most of the Gram-negative isolates we tested, and while there are alternatives, their use is confounded by [a lack of] financial support,” he added.
The study was published online in The Lancet Infectious Diseases.
BARNARDS study
In this substudy of the Burden of Antibiotic Resistance in Neonates from Developing Societies (BARNARDS) study, investigators focused on the effectiveness of antibiotic therapies after taking into account the high prevalence of pathogen resistance to ampicillin-gentamicin. Participating countries included Bangladesh, Ethiopia, India, Nigeria, Pakistan, Rwanda, and South Africa.
“Blood samples were obtained from neonates presenting with clinical signs of sepsis,” the authors note, “and WGS [whole-genome sequencing] and MICs [minimum inhibitory concentrations] for antibiotic treatment were determined for bacterial isolates from culture-confirmed sepsis.” Between Nov. 2015 and Feb. 2018, 36,285 neonates were enrolled into the main BARNARDS study, of whom 9,874 had clinically diagnosed sepsis and 5,749 had antibiotic data.
A total of 2,483 neonates had culture-confirmed sepsis, and WGS data were available for 457 isolates taken from 442 neonates. Slightly over three-quarters of the 5,749 neonates who had antibiotic data received first-line ampicillin-gentamicin. The other three most commonly prescribed antibiotic combinations were ceftazidime-amikacin, piperacillin-tazobactam-amikacin, and amoxicillin-clavulanate-amikacin.
Neonates treated with ceftazidime-amikacin had a 68% lower reported mortality than those treated with ampicillin-gentamicin at an adjusted hazard ratio of 0.32 (95% confidence interval, 0.14-0.72; P = .006), the investigators report. In contrast, no significant differences in mortality rates were reported for neonates treated with amoxicillin-clavulanate-amikacin or piperacillin-tazobactam-amikacin compared to those treated with ampicillin-gentamicin.
Investigators were careful to note that mortality effects associated with the different antibiotic combinations might have been confounded by country-specific effects or by underreporting of mortality, as a large proportion of neonates who were treated with ampicillin-gentamicin were followed for fewer than 10 days. However, in a previously unreported finding from the same study, neonatal mortality from sepsis dropped by over 50% in two federally funded sites in Nigeria that changed their treatment from the WHO-recommended ampicillin-gentamicin regimen to ceftazidime-amikacin – which Dr. Walsh suggested was as strong an endorsement of ceftazidime-amikacin over ampicillin-gentamicin as any.
Gram-negative resistance
In looking at resistance patterns to the antibiotic combinations used in these countries, investigators found that Gram-negative isolates were “overwhelmingly resistant” to ampicillin, and over 70% of them were resistant to gentamicin as well. Resistance was also extremely high among Staphylococcus spp., which are regarded as intrinsically resistant to ampicillin, rendering the drug essentially useless in this treatment setting.
Resistance to amikacin was much lower, with only about 26% of Gram-negative isolates showing resistance. In terms of coverage against Gram-negative isolates, the lowest level of coverage was provided by ampicillin-gentamicin at slightly over 28%, compared with about 73% for amoxicillin-clavulanate-amikacin, 77% for ceftazidime-amikacin, and 80% for piperacillin-tazobactam-amikacin.
In contrast, “Gram-positive isolates generally had reduced levels of resistance,” the authors state. As Dr. Walsh noted, the consortium also did an analysis assessing how much the antibiotic combinations cost and how much of that cost was passed on to parents. For example, in Nigeria, the entire cost of treatment is passed down to the parents, “so if they are earning, say, $5.00 a day and the infant needs ceftazidime-amikacin, where the cost per dose is about $6.00 or $7.00 a day, parents can’t afford it,” Dr. Walsh observed.
This part of the conversation, he added, tends to get lost in many studies of antibiotic resistance in LMICs, which is a critical omission, because in many instances, the choice of treatment does come down to affordability. “It’s all very well for the WHO to sit there and say, ampicillin-gentamicin is perfect, but the combination actually doesn’t work in over 70% of the Gram-negative bacteria we looked at in these countries,” Dr. Walsh emphasized.
“The fact is that we have to be a lot more internationally engaged as to what’s actually happening in poorer populations, because unless we do, neonates are going to continue to die,” he said.
Editorial commentary
Commenting on the findings, lead editorialist Luregn Schlapbach, MD, PhD, of University Children’s Hospital Zurich, Switzerland, pointed out that the study has a number of limitations, including a high rate of dropouts from follow-up. This could possibly result in underestimation of neonatal mortality as well as country-specific biases. Nevertheless, Dr. Schlapbach feels that the integration of sequential clinical, genomic, microbiologic, drug, and cost data across a large network in LMIC settings is “exceptional” and will serve to inform “urgently needed” clinical trials in the field of neonatal sepsis.
“At present, increasing global antibiotic resistance is threatening progress against neonatal sepsis, prompting urgency to develop improved measures to effectively prevent and treat life-threatening infections in this high-risk group,” Dr. Schlapbach and colleagues write.
“The findings from the BARNARDS study call for randomized trials comparing mortality benefit and cost efficiency of different antibiotic combinations and management algorithms to safely reduce unnecessary antibiotic exposure for neonatal sepsis,” the editorialists concluded.
The authors and editorialists have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
Frail COPD patients at high risk of disability and death
Frail patients with chronic obstructive pulmonary disease (COPD) are at high risk of disability and death, a prospective cohort study of community-dwelling adults has shown.
“Frailty, a widely recognized geriatric syndrome characterized by multidimensional functional decline in bio-psycho-social factors, is associated with functional disability and mortality,” senior author Tze Pin Ng, MD, National University of Singapore, and colleagues explain.

“Our results ... suggest that beyond traditional prognostic markers such as FEV1% (forced expiratory volume in 1 second) and dyspnea, the physical frailty phenotype provides additional useful prognostic information on future risks of disability and mortality,” the authors suggest.
The study was published online Dec. 12 in the journal CHEST®.
SLAS-1 and SLAS-2
Data from the Singapore Longitudinal Ageing Study (SLAS-1) and SLAS-2 were collected and analyzed. SLAS-1 recruited 2,804 participants 55 years of age and older from Sept. 2003 through Dec. 2004, while SLAS-2 recruited 3,270 participants of the same age between March 2009 and June 2013. “Follow-up visits and assessments were conducted approximately 3-5 years apart,” the investigators noted.
Mortality was determined at a mean of 9.5 years of follow-up for SLAS-1 participants and a mean of 6.5 years of follow-up for SLAS-2 participants. A total of 4,627 participants were eventually included in the analysis, of whom 1,162 had COPD and 3,465 did not. COPD was classified as mild if FEV1% was 80% or greater; moderate if FEV1% was 50% to less than 80%; and severe if FEV1% was less than 50%.
Frailty in turn was based on five clinical criteria: weakness, slowness, low physical activity, exhaustion, and shrinking. Participants were classified as frail if they met three or more of these criteria and as prefrail if they met one or two.
Adverse health outcomes were assessed in terms of instrumental and basic activities of daily living (IADL/ADL); disability was defined as self-reported difficulty with, or need of assistance in, at least one IADL or ADL.
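The study’s grading rules are explicit enough to sketch in code. The following is an illustrative sketch only; the function names and the “robust” label for participants meeting no frailty criteria are assumptions for this example, not terms used by the authors:

```python
def copd_severity(fev1_pct: float) -> str:
    """Grade COPD severity from FEV1% predicted, using the study's cutoffs:
    mild >= 80%, moderate 50% to <80%, severe < 50%."""
    if fev1_pct >= 80:
        return "mild"
    elif fev1_pct >= 50:
        return "moderate"
    return "severe"

def frailty_status(criteria_met: int) -> str:
    """Classify the physical frailty phenotype by how many of the five
    criteria (weakness, slowness, low physical activity, exhaustion,
    shrinking) a participant meets: >=3 frail, 1-2 prefrail."""
    if criteria_met >= 3:
        return "frail"
    elif criteria_met >= 1:
        return "prefrail"
    return "robust"  # label assumed here for zero criteria met
```

For example, a participant with an FEV1% of 62% meeting two criteria would be graded moderate COPD with prefrailty under these rules.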
Frail or prefrail
Almost half of the participants were frail or prefrail, as the authors reported, while 25% had COPD. Among the participants with COPD, 30% had moderate to severe COPD, 6.4% had dyspnea, and almost half had prefrailty, while approximately 7% were classified as frail.
This percentage was 86% higher than it was for participants without COPD, among whom just 3.2% were assessed as frail, at an odds ratio of 1.86 (95% CI, 1.35-2.56). Further adjustments for possible confounders reduced the gap between frail COPD and frail non-COPD participants, but frailty remained significantly associated with COPD, at an OR of 1.61 (95% CI, 1.15-2.26), the investigators note.
Furthermore, compared to those without COPD, a diagnosis of COPD without and with dyspnea was associated with a 1.5- and 4.2-fold increase in prevalent frailty (95% CI, 1.04-2.08; 1.84-9.19), respectively, although not with prefrailty. Again, adjusting for multiple confounders, FEV1%, dyspnea, and both prefrailty and frailty were associated with an approximately twofold higher prevalence of IADL/ADL disability, while the prevalence of IADL/ADL disability for participants with COPD was approximately fourfold higher in those with co-occurring FEV1% less than 80% with either prefrailty, frailty, or dyspnea.
Furthermore, the presence of prefrailty or frailty in combination with a lower FEV1% or dyspnea was associated with a 3.7- to 3.8-fold increased risk of having an IADL or ADL disability.
Frailty and mortality
Some 1,116 participants with COPD were followed for a mean of 2,981 days for mortality outcomes. Both FEV1% less than 50% and the presence of prefrailty/frailty nearly doubled the risk of mortality, at an adjusted hazard ratio of 1.8 (95% CI, 1.24-2.68), compared with patients with an FEV1% of 80% or greater. In combination with either FEV1% less than 80% or prefrailty/frailty, dyspnea more than doubled the risk of mortality, at an HR of 2.4 for both combinations.
“However, the mortality risk of participants with COPD was highest among those with FEV1% less than 80% and prefrailty/frailty,” the authors note, more than tripling mortality risk at an adjusted HR of 3.25 (95% CI, 1.97-5.36). Interestingly, FEV1 less than 80% and prefrailty/frailty – both alone and in combination – were also associated with a twofold to fourfold increased risk of IADL or ADL disability in participants without COPD but were less strongly associated with mortality.
Researchers then went on to create a summary risk score containing all relevant variables with values ranging from 0 to 5. The highest risk category of 3 to 5 was associated with a 7- to 8.5-fold increased risk for IADL and ADL disability and mortality among participants with COPD, and that risk remained high after adjusting for multiple confounders.
Interestingly, frailty did not significantly predict mortality in women, while dyspnea did not significantly predict mortality in men. “Recognition and assessment of physical frailty in addition to FEV1% and dyspnea would allow for more accurate identification and targeted treatment of COPD at risk of future adverse outcomes,” the authors suggest.
Frailty scoring system
Asked to comment on the study, Sachin Gupta, MD, a pulmonologist and critical care specialist at Alameda Health System in Oakland, Calif., noted that the current study adds to the body of literature that outcomes in patients with COPD depend as much on objectively measured variables as on qualitative measures. “By applying a frailty scoring system, these researchers were able to categorize frailty and study its impact on patient characteristics and outcomes,” he told this news organization in an email.
The summary risk assessment tool developed and assessed is familiar: It carries parallels to the widely utilized BODE Index, replacing body mass index and 6-minute walk distance with the frailty scale, he added. “Findings from this study support the idea that what meets the eye in face-to-face visits – frailty – can be codified and be part of a tool that is predictive of outcomes,” Dr. Gupta underscored.
The authors had no conflicts of interest to declare. Dr. Gupta disclosed that he is also an employee and shareholder at Genentech.
A version of this article first appeared on Medscape.com.
, a prospective cohort study of community-dwelling adults has shown.
“Frailty, a widely recognized geriatric syndrome characterized by multidimensional functional decline in bio-psycho-social factors, is associated with functional disability and mortality,” senior author Tze Pin Ng, MD, National University of Singapore, and colleagues explain.“Our results ... suggest that beyond traditional prognostic markers such as FEV1% (forced expiratory volume in 1 second) and dyspnea, the physical frailty phenotype provides additional useful prognostic information on future risks of disability and mortality,” the authors suggest.
The study was published online Dec. 12 in the journal CHEST®.
SLAS-1 and SLAS-2
Data from the Singapore Longitudinal Ageing Study (SLAS-1) and SLAS-2 were collected and analyzed. SLAS-1 recruited 2,804 participants 55 years of age and older from Sept. 2003 through Dec. 2004, while SLAS-2 recruited 3,270 participants of the same age between March 2009 and June 2013. “Follow-up visits and assessments were conducted approximately 3-5 years apart,” the investigators noted.
Mortality was determined at a mean of 9.5 years of follow-up for SLAS-1 participants and a mean of 6.5 years’ follow-up for SLAS-2 participants. A total of 4,627 participants were eventually included in the analysis, of whom 1,162 patients had COPD and 3,465 patients did not. COPD was classified as mild if FEV1% was greater than or equal to 80%; moderate if FEV1% was greater than or equal to 50% to less than 80%, and severe if FEV1% was less than 50%.
Frailty in turn was based on five clinical criteria, including weakness, slowness, low physical activity, exhaustion, and shrinking. Participants were classified as frail if they met three or more of these criteria and prefrail if they met one or two criteria.
Adverse health outcomes were judged on the basis of instrumental or basic activities of daily living (IADL/ADL), while disability was judged by self-reported difficulties in or requiring assistance with at least one IADL or ADL.
Frail or prefrail
Almost half of the participants were frail or prefrail, as the authors reported, while 25% had COPD. Among the participants with COPD, 30% had moderate to severe COPD, 6.4% had dyspnea, and almost half had prefrailty, while approximately 7% were classified as frail.
This percentage was 86% higher than it was for participants without COPD, among whom just 3.2% were assessed as frail, at an odds ratio of 1.86 (95% CI, 1.35-2.56). Further adjustments for possible confounders reduced the gap between frail COPD and frail non-COPD participants, but frailty remained significantly associated with COPD, at an OR of 1.61 (95% CI, 1.15-2.26), the investigators note.
Physical frailty is associated with an increased risk of disability and mortality in patients with chronic obstructive pulmonary disease (COPD), a prospective cohort study of community-dwelling adults has shown.
“Frailty, a widely recognized geriatric syndrome characterized by multidimensional functional decline in bio-psycho-social factors, is associated with functional disability and mortality,” senior author Tze Pin Ng, MD, National University of Singapore, and colleagues explain. “Our results ... suggest that beyond traditional prognostic markers such as FEV1% (forced expiratory volume in 1 second) and dyspnea, the physical frailty phenotype provides additional useful prognostic information on future risks of disability and mortality,” the authors suggest.
The study was published online Dec. 12 in the journal CHEST®.
SLAS-1 and SLAS-2
Data from the Singapore Longitudinal Ageing Study (SLAS-1) and SLAS-2 were collected and analyzed. SLAS-1 recruited 2,804 participants 55 years of age and older from Sept. 2003 through Dec. 2004, while SLAS-2 recruited 3,270 participants of the same age between March 2009 and June 2013. “Follow-up visits and assessments were conducted approximately 3-5 years apart,” the investigators noted.
Mortality was determined at a mean of 9.5 years of follow-up for SLAS-1 participants and a mean of 6.5 years’ follow-up for SLAS-2 participants. A total of 4,627 participants were eventually included in the analysis, of whom 1,162 patients had COPD and 3,465 patients did not. COPD was classified as mild if FEV1% was greater than or equal to 80%; moderate if FEV1% was greater than or equal to 50% to less than 80%, and severe if FEV1% was less than 50%.
Frailty in turn was based on five clinical criteria, including weakness, slowness, low physical activity, exhaustion, and shrinking. Participants were classified as frail if they met three or more of these criteria and prefrail if they met one or two criteria.
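The severity and frailty gradings described above are simple threshold rules. As a minimal illustrative sketch of those rules (function and variable names are ours, not from the study):

```python
def copd_severity(fev1_pct):
    """Classify COPD severity by FEV1%, per the study's cutoffs:
    mild >=80%, moderate 50% to <80%, severe <50%."""
    if fev1_pct >= 80:
        return "mild"
    elif fev1_pct >= 50:
        return "moderate"
    return "severe"

def frailty_status(criteria_met):
    """Frail if >=3 of the 5 criteria (weakness, slowness, low
    physical activity, exhaustion, shrinking); prefrail if 1-2."""
    if criteria_met >= 3:
        return "frail"
    elif criteria_met >= 1:
        return "prefrail"
    return "robust"

print(copd_severity(65), frailty_status(2))  # prints: moderate prefrail
```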
Adverse health outcomes were judged on the basis of instrumental or basic activities of daily living (IADL/ADL), while disability was judged by self-reported difficulties in or requiring assistance with at least one IADL or ADL.
Frail or prefrail
Almost half of the participants were frail or prefrail, as the authors reported, while 25% had COPD. Among the participants with COPD, 30% had moderate to severe COPD, 6.4% had dyspnea, and almost half had prefrailty, while approximately 7% were classified as frail.
The odds of frailty were 86% higher than among participants without COPD, of whom just 3.2% were assessed as frail, at an odds ratio of 1.86 (95% CI, 1.35-2.56). Further adjustment for possible confounders narrowed the gap between frail COPD and frail non-COPD participants, but frailty remained significantly associated with COPD, at an OR of 1.61 (95% CI, 1.15-2.26), the investigators note.
Furthermore, compared with those without COPD, a diagnosis of COPD was associated with a 1.5-fold increase in prevalent frailty in the absence of dyspnea and a 4.2-fold increase in its presence (95% CI, 1.04-2.08 and 1.84-9.19, respectively), although not with prefrailty. Again adjusting for multiple confounders, FEV1%, dyspnea, and both prefrailty and frailty were each associated with an approximately twofold higher prevalence of IADL/ADL disability, while the prevalence of IADL/ADL disability among participants with COPD was approximately fourfold higher in those with FEV1% less than 80% co-occurring with prefrailty, frailty, or dyspnea.
Furthermore, the presence of prefrailty or frailty in combination with a lower FEV1% or dyspnea was associated with a 3.7- to 3.8-fold increased risk of having an IADL or ADL disability.
Frailty and mortality
Some 1,116 participants with COPD were followed for a mean of 2,981 days for mortality outcomes. Both FEV1% less than 50% and the presence of prefrailty/frailty almost doubled the risk of mortality, at an adjusted hazard ratio of 1.8 (95% CI, 1.24-2.68), compared with patients with an FEV1% greater than or equal to 80%. In combination with either FEV1% less than 80% or prefrailty/frailty, dyspnea more than doubled the risk of mortality, at an HR of 2.4 for both combinations.
“However, the mortality risk of participants with COPD was highest among those with FEV1% less than 80% and prefrailty/frailty,” the authors note, more than tripling mortality risk at an adjusted HR of 3.25 (95% CI, 1.97-5.36). Interestingly, FEV1 less than 80% and prefrailty/frailty – both alone and in combination – were also associated with a twofold to fourfold increased risk of IADL or ADL disability in participants without COPD but were less strongly associated with mortality.
Researchers then went on to create a summary risk score containing all relevant variables with values ranging from 0 to 5. The highest risk category of 3 to 5 was associated with a 7- to 8.5-fold increased risk for IADL and ADL disability and mortality among participants with COPD, and that risk remained high after adjusting for multiple confounders.
Interestingly, frailty did not significantly predict mortality in women, while dyspnea did not significantly predict mortality in men. “Recognition and assessment of physical frailty in addition to FEV1% and dyspnea would allow for more accurate identification and targeted treatment of COPD at risk of future adverse outcomes,” the authors suggest.
Frailty scoring system
Asked to comment on the study, Sachin Gupta, MD, a pulmonologist and critical care specialist at Alameda Health System in Oakland, Calif., noted that the current study adds to the body of literature that outcomes in patients with COPD depend as much on objectively measured variables as on qualitative measures. “By applying a frailty scoring system, these researchers were able to categorize frailty and study its impact on patient characteristics and outcomes,” he told this news organization in an email.
The summary risk assessment tool developed and assessed is familiar: It carries parallels to the widely utilized BODE Index, replacing body mass index and 6-minute walk distance with the frailty scale, he added. “Findings from this study support the idea that what meets the eye in face-to-face visits – frailty – can be codified and be part of a tool that is predictive of outcomes,” Dr. Gupta underscored.
The authors had no conflicts of interest to declare. Dr. Gupta disclosed that he is also an employee and shareholder at Genentech.
A version of this article first appeared on Medscape.com.
FROM CHEST
Low BMI, weight loss predict mortality risk in ILD
A low body mass index (BMI) indicative of being underweight as well as a weight loss of 2 kg or more over the course of 1 year were both independently associated with a higher mortality risk in the following year in patients with fibrotic interstitial lung disease (ILD). In contrast, being both overweight and obese appeared to be protective against mortality at the same 1-year endpoint, according to the results of an observational, retrospective cohort study.
Compared with patients with a normal BMI, patients who were underweight at a BMI of less than 18.5 kg/m² were over three times more likely to die at 1 year, at a hazard ratio of 3.19 (P < .001), senior author Christopher Ryerson, MD, University of British Columbia, Vancouver, and colleagues reported in the journal Chest.
In contrast, patients who were overweight with a BMI of 25-29 had roughly half the mortality risk as those who were underweight, at an HR of 0.52 (P < .001). Results were roughly similar among the patients with obesity with a BMI in excess of 30, among whom the HR for mortality at 1 year was 0.55 (P < .001), compared with those who were underweight.
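The BMI strata compared above are threshold checks; a minimal illustrative sketch of the cutoffs reported in the study (the function name is ours, not from the paper):

```python
def bmi_category(bmi):
    """BMI strata (kg/m^2) as reported in the study:
    <18.5 underweight, 25-29 overweight, >=30 obese."""
    if bmi < 18.5:
        return "underweight"
    elif bmi < 25:
        return "normal"
    elif bmi < 30:
        return "overweight"
    return "obese"

print(bmi_category(17.9))  # prints: underweight
```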
“All patients with fibrotic ILD should still engage in exercise and eat an appropriate diet and it is still okay if you are obese and lose weight as a consequence of these lifestyle choices,” Dr. Ryerson told this news organization. “But physicians should be concerned about patients who have severe ILD and who start to lose weight unintentionally since this often represents end-stage fibrosis or some other major comorbidity such as cancer.”
Two large cohorts
Patients from two large cohorts, including the six-center Canadian Registry for Pulmonary Fibrosis (CARE-PF) and the ILD registry at the University of California, San Francisco, were enrolled in the study. A total of 1,786 patients were included from the CARE-PF registry, which served as the derivation cohort, while another 1,779 patients from the UCSF registry served as the validation cohort. In the CARE-PF cohort, 21% of all ILD patients experienced a weight loss of at least 1 kg in the first year of follow-up, including 31% of patients with idiopathic pulmonary fibrosis (IPF).
“Fewer patients experienced a weight loss of at least 1 kg during the first year of the study period in the UCSF cohort,” the authors noted: only 12% of all ILD patients and 14% of those with IPF lost at least 1 kg over the course of the year. At 2 years’ follow-up, 35% of all ILD patients had lost at least 1 kg, as had 46% of all IPF patients. Looking at BMI, “a higher value was associated with decreased 1-year mortality in both cohorts on unadjusted analysis,” the investigators observed.
In the CARE-PF cohort, the HR for 1-year mortality was 0.96 per unit difference in BMI (P < .001), while in the UCSF cohort, the HR for 1-year mortality was exactly the same, at 0.96 per unit difference in BMI (P < .001). The authors then adjusted findings for the ILD-GAP index, which included gender, age, and physiology index. After adjusting for this index, the HR for 1-year mortality in the CARE-PF cohort was 0.93 per unit change in BMI (95% CI, 0.90-0.967; P < .001), while in the UCSF cohort, the HR was 0.96 per unit change in BMI (95% CI, 0.94-0.98; P = .001).
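Under the usual proportional-hazards interpretation (assuming the model is log-linear in BMI, which the article does not state explicitly), a per-unit HR compounds multiplicatively, so a k-unit BMI difference corresponds to HR raised to the power k. A quick illustrative calculation with the adjusted CARE-PF estimate:

```python
def hr_for_difference(hr_per_unit, units):
    """Compound a per-unit hazard ratio over a k-unit difference,
    assuming a log-linear proportional-hazards model."""
    return hr_per_unit ** units

# A 5-unit BMI difference at the CARE-PF adjusted HR of 0.93/unit:
print(round(hr_for_difference(0.93, 5), 2))  # prints: 0.7
```

In other words, under these assumptions a 5-unit higher BMI would correspond to roughly a 30% lower adjusted hazard of 1-year mortality.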
Indeed, each 1-unit increase in BMI above 30, adjusted for the ILD-GAP index, was associated with a reduced risk of mortality at 1 year in both cohorts, at an HR of 0.98 (P = .001) in the CARE-PF cohort and an HR of 0.98 (P < .001) in the UCSF cohort. In contrast, patients who experienced a weight loss of 2 kg or more within 1 year had a 41% increased risk of death in the subsequent year after adjusting for the ILD-GAP index and baseline BMI category, at an HR of 1.41 (P = .04). “The absolute change in mortality is much smaller than this,” Dr. Ryerson acknowledged.
“However, the magnitude [in mortality risk] did impress us and this illustrates how weight loss is a frequent consequence of end-stage disease which is something that we have all observed clinically as well,” he added.
Mortality risk plateaued in patients with a greater weight loss, the investigators observed, and there was no association between weight and subsequent 1-year mortality in either cohort on unadjusted analysis.
On the other hand, higher weight was associated with a 13%-16% lower mortality risk at 1 year after adjusting for the ILD-GAP index, at an HR of 0.84 per 10 kg (P = .001) in the CARE-PF cohort and an HR of 0.87 per 10 kg (P < .001) in the UCSF cohort. “Results were similar in the two studied cohorts, suggesting a robust and generalizable association of both low BMI and weight loss with mortality,” the authors emphasized.
“Together these studies highlight the potential link between obesity and ILD pathogenesis and further suggest the possibility that nutritional support may have a more specific and important role in the management of fibrotic ILD,” the authors wrote. Dr. Ryerson in turn noted that being able to determine mortality risk more accurately than current mortality risk prediction models are able to do is very helpful when dealing with what are sometimes life-and-death decisions.
He also said that having more insight into a patient’s prognosis can change how physicians manage patients with respect to either transplantation or palliation and potentially the need to be more aggressive with pharmacotherapy as well.
Addressing weight loss
Asked to comment on the findings, Elizabeth Volkmann, MD, associate professor of medicine, University of California, Los Angeles, said that this was a very important study and something that she feels does not get adequate attention in clinical practice.
“Weight loss and malnutrition occur in many patients with ILD due to various factors such as gastrointestinal side effects from antifibrotic therapies, decreased oral intake due to psychosocial issues including depression, and increased caloric requirements due to increased work of breathing,” she said in an interview. That said, weight loss and malnutrition are still often underaddressed during clinical encounters for patients with ILD where the focus is on lung health.
“This study illuminates the importance of addressing weight loss in all patients with ILD as it can contribute to heightened risk of mortality,” Dr. Volkmann reemphasized. Dr. Volkmann and colleagues themselves recently reported that radiographic progression of scleroderma lung disease over the course of 1-2 years is associated with an increased risk of long-term mortality, based on two independent studies of systemic sclerosis–interstitial lung disease with extensive follow-up.
Over 8 years of follow-up, patients in the Scleroderma Lung Study II who exhibited an increase of 2% or more in the QILD score – a score that reflects the sum of all abnormally classified scores, including those for fibrosis, ground glass opacity, and honeycombing – for the whole lung at 24 months had an almost fourfold increased risk in mortality, which was significant (P = .014).
The association of an increase in the QILD of at least 2% at 12 months was suggestive in its association with mortality in the SLS I cohort at 12 years of follow-up, a finding that suggests that radiographic progression measured at 2 years is a better predictor of long-term mortality than at 1 year, as the authors concluded.
The CARE-PF is funded by Boehringer Ingelheim. Dr. Ryerson reported receiving personal fees from Boehringer Ingelheim. Dr. Volkmann consults or has received speaker fees from Boehringer Ingelheim and has received grant support from Kadmon and Horizon Therapeutics.
A version of this article first appeared on Medscape.com.
FROM CHEST
Appendicitis: Up-front antibiotics OK in select patients
An antibiotics-first approach is an acceptable option for selected patients with uncomplicated acute appendicitis, a comprehensive review of the literature suggests.
“I think this is a wonderful thing that we have for our patients now, because think about the patient who had a heart attack yesterday and has appendicitis today – you don’t want to operate on that patient – so this gives us a wonderful option in an environment where sometimes surgery is just bad timing,” Theodore Pappas, MD, professor of surgery, Duke University, Durham, N.C., told this news organization.
“It’s not that every 25-year-old who comes in should get antibiotics instead of surgery. It’s really better to say that this gives us flexibility for patients who we may not want to operate on immediately, and now we have a great option,” he stressed.
The study was published Dec. 14, 2021, in JAMA.
Acute appendicitis is the most common abdominal surgical emergency in the world, as the authors pointed out.
“We think it’s going to be 60%-70% of patients who are good candidates for consideration of antibiotics,” they speculated.
Current evidence
The review summarizes current evidence regarding the diagnosis and management of acute appendicitis based on a total of 71 articles including 10 systematic reviews, 9 meta-analyses, and 11 practice guidelines. “Appendicitis is classified as uncomplicated or complicated,” the authors explained. Uncomplicated appendicitis is acute appendicitis in the absence of clinical or radiographic signs of perforation.
In contrast, complicated appendicitis involves appendiceal rupture with subsequent abscess or phlegmon formation; the definitive diagnosis can be confirmed by CT scan. “In cases of diagnostic uncertainty imaging should be performed,” the investigators cautioned – usually with ultrasound and CT scans.
If uncomplicated appendicitis is confirmed, three different guidelines now support the role of an antibiotics-first approach, including guidelines from the American Association for Surgery of Trauma. For this group of patients, empirical broad-spectrum antibiotic coverage that can be transitioned to outpatient treatment is commonly used. For example, patients may be treated initially with intravenous ertapenem monotherapy or an intravenous cephalosporin plus metronidazole, then transitioned on discharge to oral fluoroquinolones plus metronidazole.
Antibiotics that cover streptococci, nonresistant Enterobacteriaceae, and anaerobes are usually adequate, they added. “The recommended duration of antibiotics is 10 days,” they noted. In most of the clinical trials comparing an antibiotics-first approach with surgery, the primary endpoint was treatment failure at 1 year – that is, recurrence of symptoms during the year after treatment. Across a number of clinical trials, that recurrence rate ranged from a low of 15% to a high of 41%.
In contrast, recurrence rarely occurs after surgical appendectomy. Early treatment failure, defined as clinical deterioration or lack of clinical improvement within 24-72 hours following initiation of antibiotics, is much less likely to occur, with a reported rate of between 8% and 12% of patients. The only long-term follow-up of an antibiotics-first approach in uncomplicated appendicitis was done in the Appendicitis Acuta (APPAC) trial, where at 5 years, the recurrence rate of acute appendicitis was 39% (95% confidence interval, 33.1%-45.3%) in patients initially treated with antibiotics alone.
Typically, there have been no differences in the length of hospital stay in most of the clinical trials reviewed. As Dr. Pappas explained, following a standard appendectomy, patients are typically sent home within 24 hours of undergoing surgery. On the other hand, if treated with intravenous antibiotics first, patients are usually admitted overnight then switched to oral antibiotics on discharge – suggesting that there is little difference in the time spent in hospital between the two groups.
However, there are groups of patients who predictably will not do well on antibiotics first, he cautioned. For example, patients who present with high fever, shaking chills, and severe abdominal pain do not have a mild case of appendicitis. Neither do patients who may not look sick but whose CT scan shows a hard piece of stool jammed into the end of the appendix, causing the blockage: These patients are also more likely to fail antibiotics, Dr. Pappas added.
“There is also a group of patients who have a much more dilated appendix with some fluid around it,” he noted, “and these patients are less likely to be managed with antibiotics successfully as well.” Lastly, though not part of this review, there is a subset of patients, for whom an antibiotics-first protocol has long been in place, who have a perforated appendix that has been walled off in a pocket of pus.
“These patients are treated with an antibiotic first because if you operate on them, it’s a mess, whereas if patients are reasonably stable, you can drain the abscess and then put them on antibiotics, and then you can decide 6-8 weeks later if you are going to take the appendix out,” Dr. Pappas said, adding: “Most of the time, what should be happening is the surgeon should consult with the patient and then they can weigh in – here are the options and here’s what I recommend.
“But patients will pick what they pick, and surgery is a very compelling argument: It’s laparoscopic surgery, patients are home in 24 hours, and the complication rate [and the recurrence rate] are incredibly low, so you have to think through all sorts of issues and when you come to a certain conclusion, it has to make a lot of sense to the patient,” Dr. Pappas emphasized.
Asked to comment on the findings, Ram Nirula, MD, D. Rees and Eleanor T. Jensen Presidential Chair in Surgery, University of Utah, Salt Lake City, noted that, as with all things in medicine, nothing is 100%.
“There are times where antibiotics for uncomplicated appendicitis may be appropriate, and times where appendectomy is most appropriate,” he said in an interview. Most of the evidence now shows that the risk of treatment failure following nonoperative management for uncomplicated appendicitis is significant, ranging from 15% to 40%, as Dr. Nirula reaffirmed.
A more recent randomized controlled trial from the CODA collaborative found that quality of life at 30 days was similar for patients who got up-front antibiotics and for those who got surgery, but the failure rate was high, particularly for those with an appendicolith (which the review authors would have classified as complicated appendicitis).
Moreover, in this subset of patients, quality of life and patient satisfaction were lower in the antibiotic treatment group than in the surgical controls, as Dr. Nirula also pointed out. While length of hospital stay was similar, overall health care resource utilization was higher in the antibiotic group. “So, if it were me, I would want my appendix removed at this stage in my life; however, for those who are poor surgical candidates, I would favor antibiotics,” Dr. Nirula stressed. He added that the presence of an appendicolith makes the argument for surgery more compelling, although he would still try antibiotics in patients with an appendicolith who are poor surgical candidates.
Dr. Pappas reported serving as a paid consultant for Transenterix. Dr. Nirula disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA
Genomic profiling can improve PFS in metastatic breast cancer
“The message is very simple,” lead study author Fabrice Andre, MD, PhD, research director, Gustave Roussy Cancer Campus, Villejuif, France, told this news organization during a virtual press briefing. “If a genomic alteration is validated, it is useful to give targeted therapy, but if the genomic alteration is not validated, we should not give a targeted therapy.”
The study, which pooled results from the phase 2 randomized trials SAFIR02-BREAST and SAFIR-PI3K, was presented Dec. 7 in a virtual press briefing at the San Antonio Breast Cancer Symposium (SABCS) 2021.
The new analysis explored two key questions: Is genomic testing of a cancer effective? And how should oncologists interpret a genomic report?
A total of 1,462 patients with metastatic HER2-negative breast cancer underwent next-generation sequencing. After receiving six to eight cycles of chemotherapy, 238 patients (16%) were randomized to one of nine targeted therapies matched to the genomic alteration identified on testing or to maintenance chemotherapy.
Genomic alterations in the patients’ tumors were classified using the ESMO Scale of Actionability of Molecular Targets (ESCAT). A tier I ranking indicates that the alteration-drug match is associated with improved outcomes in clinical trials, while a tier II ranking means that the alteration-drug match is associated with antitumor activity but the magnitude of benefit remains unknown.
In an analysis of the overall trial population, Dr. Andre and colleagues found a longer median progression-free survival in the targeted therapy group (5.5 months) than in the maintenance chemotherapy group (2.9 months), but the difference was not significant (P = .109).
In a subgroup of 115 patients presenting with tier I or tier II genomic alterations, median progression-free survival was 9.1 months among patients receiving targeted therapy vs 2.8 months in the maintenance chemotherapy group, corresponding to a 59% reduction in the risk of progression or death (hazard ratio, 0.41; P < .001).
In addition, the team carried out single-nucleotide polymorphism (SNP) array analyses on 926 patients. They identified 21 genes that were altered more frequently in the metastases than in the primary tumors, and they observed that among patients with BRCA1 or BRCA2 mutations who were treated with olaparib, a high homologous recombination deficiency score was associated with longer progression-free survival.
“We also identified a subset of patients who are resistant to CDK4/6 inhibitors who presented with CDK4 amplification, and this amplification is associated with overexpression,” Dr. Andre explained.
When asked whether most oncologists were using genomic profiling to tailor treatment for breast cancer patients, Dr. Andre said that multigene sequencing is now widely used.
“The issue is not so much whether we should use or not use genomics; the issue here is to force everyone to put the genomic alteration in the right context in terms of its level of evidence,” Dr. Andre told this news organization.
Oncologists may overinterpret the genomic activation identified and give a targeted therapy that is not validated, but “oncologists should not use genomic information when the target has not been previously validated in a therapeutic trial,” he added.
Virginia Kaklamani, MD, professor of medicine at the University of Texas Health Science Center at San Antonio, said in an interview that approximately 5 years ago, Dr. Andre was part of the first debate at the SABCS discussing whether oncologists should be conducting next-generation sequencing for their patients with breast cancer.
“At the time, [Dr.] Andre’s comment was that we should not be doing it,” recalled Dr. Kaklamani, who is also leader of the breast cancer program at the Mays Cancer Center at the University of Texas Health San Antonio MD Anderson. “At that point, I think it was clear that we did not have the data we needed to be able to use next-generation sequencing to change our clinical management.”
However, the evidence has evolved. “Based on this clinical trial, I think we now do have the data,” she said. “I think that [next-generation sequencing] is something we will be using more and more in practice and treating our patients based on [validated] genomic alterations.”
Dr. Andre has received grants or advisory board/speaker honoraria from Daiichi Sankyo, Roche, Pfizer, AstraZeneca, Lilly, and Novartis. Dr. Kaklamani has served as a consultant for Puma, AstraZeneca, Athenex, and Immunomedics, has received research funding from Eisai, and has served as a speaker for Pfizer, Celgene, Genentech, and Genomic Health, among other companies.
A version of this article first appeared on Medscape.com.
“The message is very simple,” lead study author Fabrice Andre, MD, PhD, research director, Gustave Roussy Cancer Campus, Villejuif, France, told this news organization during a virtual press briefing. “If a genomic alteration is validated, it is useful to give targeted therapy, but if the genomic alteration is not validated, we should not give a targeted therapy.”
The study, which pooled results from phase 2 randomized trials SAFIR02-BREAST and SAFIR-P13K, was presented Dec. 7 in a virtual press briefing at the San Antonio Breast Cancer Symposium (SABCS) 2021.
The new analysis explored two key questions: Is genomic testing of a cancer effective? And how should oncologists interpret a genomic report?
A total of 1,462 patients with metastatic HER2-negative breast cancer underwent next-generation sequencing. After receiving six to eight cycles of chemotherapy, 238 patients (16%) were randomized to one of nine targeted therapies matched to the genomic alteration identified on testing or to maintenance chemotherapy.
Genomic alterations in the patients’ tumors were classified using the ESMO Scale of Actionability of Molecular Targets (ESCAT). A tier I ranking indicates that the alteration-drug match is associated with improved outcomes in clinical trials, while a tier II ranking means that the alteration-drug match is associated with antitumor activity but the magnitude of benefit remains unknown.
In an analysis of the overall trial population, Dr. Andre and colleagues found an improvement in progression-free survival in the targeted therapy group (median of 5.5 months) in comparison with the maintenance chemotherapy group (2.9 months), but the difference was not significant (P = .109).
In a subgroup of 115 patients presenting with I- or II-tier genomic alterations, median progression-free survival was 59% longer, at 9.1 months, among patients receiving targeted therapy, compared with 2.8 months in the maintenance chemotherapy group (hazard ratio, 0.41; P < .001).
In addition, the team carried out single-nucleotide polymorphism (SNP) array analyses on 926 patients. They identified 21 genes that were altered more frequently in the metastases compared with the primary tumors, and they observed that a high homologous recombination deficiency score in patients with BCRA 1 or 2 mutations was associated with a longer progression-free survival in patients treated with olaparib.
“We also identified a subset of patients who are resistant to CDK4/6 inhibitors who presented with CDK4 amplification, and this amplification is associated with overexpression,” Dr. Andre explained.
When asked whether most oncologists were using genomic profiling to tailor treatment for breast cancer patients, Dr. Andre said that multigene sequencing is now widely used.
“The issue is not so much whether we should use or not use genomics; the issue here is to force everyone to put the genomic alteration in the right context in terms of its level of evidence,” Dr. Andre told this news organization.
Oncologists may overinterpret the genomic activation identified and give a targeted therapy that is not validated, but “oncologists should not use genomic information when the target has not been previously validated in a therapeutic trial,” he added.
Virginia Kaklamani, MD, professor of medicine at the University of Texas Health Sciences Center in San Antonio, said in an interview that approximately 5 years ago, Dr. Andre was part of the first debate at the SABCS discussing whether oncologists should be conducting next-generation sequencing for their patients with breast cancer.
“At the time, [Dr.] Andre’s comment was that we should not be doing it,” recalled Dr. Kaklamani, who is also leader of the breast cancer program at the Mays Cancer Center at the University of Texas Health San Antonio MD Anderson. “At that point, I think it was clear that we did not have the data we needed to be able to use next-generation sequencing to change our clinical management.”
However, the evidence has evolved. “Based on this clinical trial, I think we now do have the data,” she said. “I think that [next-generation sequencing] is something we will be using more and more in practice and treating our patients based on [validated] genomic alterations.”
Dr. Andre has received grants or advisory board/speaker honoraria from Daiichi Sankyo, Roche, Pfizer, AstraZeneca, Lily, and Novartis. Dr. Kaklamani has served as a consultant for Puma, AstraZeneca, Athenex, and Immunomedics, has received research funding from Eisai, and has served as a speaker for Pfizer, Celgene, Genentech, and Genomic Health, among other companies.
A version of this article first appeared on Medscape.com.
“The message is very simple,” lead study author Fabrice Andre, MD, PhD, research director, Gustave Roussy Cancer Campus, Villejuif, France, told this news organization during a virtual press briefing. “If a genomic alteration is validated, it is useful to give targeted therapy, but if the genomic alteration is not validated, we should not give a targeted therapy.”
The study, which pooled results from phase 2 randomized trials SAFIR02-BREAST and SAFIR-P13K, was presented Dec. 7 in a virtual press briefing at the San Antonio Breast Cancer Symposium (SABCS) 2021.
The new analysis explored two key questions: Is genomic testing of a cancer effective? And how should oncologists interpret a genomic report?
A total of 1,462 patients with metastatic HER2-negative breast cancer underwent next-generation sequencing. After receiving six to eight cycles of chemotherapy, 238 patients (16%) were randomized to one of nine targeted therapies matched to the genomic alteration identified on testing or to maintenance chemotherapy.
Genomic alterations in the patients’ tumors were classified using the ESMO Scale of Actionability of Molecular Targets (ESCAT). A tier I ranking indicates that the alteration-drug match is associated with improved outcomes in clinical trials, while a tier II ranking means that the alteration-drug match is associated with antitumor activity but the magnitude of benefit remains unknown.
In an analysis of the overall trial population, Dr. Andre and colleagues found an improvement in progression-free survival in the targeted therapy group (median of 5.5 months) in comparison with the maintenance chemotherapy group (2.9 months), but the difference was not significant (P = .109).
In a subgroup of 115 patients presenting with tier I or tier II genomic alterations, median progression-free survival was significantly longer among patients receiving targeted therapy, at 9.1 months, compared with 2.8 months in the maintenance chemotherapy group (hazard ratio, 0.41, corresponding to a 59% reduction in the risk of progression or death; P < .001).
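The 59% figure follows directly from the hazard ratio: an HR of 0.41 implies a 1 − 0.41 = 59% reduction in the instantaneous risk of progression or death relative to the comparator arm. A minimal sketch of that arithmetic (the function name is illustrative, not from the study):

```python
def risk_reduction_pct(hazard_ratio: float) -> int:
    """Percent reduction in risk implied by a hazard ratio vs. the comparator arm."""
    return round((1 - hazard_ratio) * 100)

# HR 0.41 for targeted therapy vs. maintenance chemotherapy
print(risk_reduction_pct(0.41))  # 59
```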
In addition, the team carried out single-nucleotide polymorphism (SNP) array analyses on 926 patients. They identified 21 genes that were altered more frequently in the metastases compared with the primary tumors, and they observed that a high homologous recombination deficiency score in patients with BRCA1 or BRCA2 mutations was associated with longer progression-free survival in patients treated with olaparib.
“We also identified a subset of patients who are resistant to CDK4/6 inhibitors who presented with CDK4 amplification, and this amplification is associated with overexpression,” Dr. Andre explained.
When asked whether most oncologists were using genomic profiling to tailor treatment for breast cancer patients, Dr. Andre said that multigene sequencing is now widely used.
“The issue is not so much whether we should use or not use genomics; the issue here is to force everyone to put the genomic alteration in the right context in terms of its level of evidence,” Dr. Andre told this news organization.
Oncologists may overinterpret the genomic alteration identified and give a targeted therapy that is not validated, but "oncologists should not use genomic information when the target has not been previously validated in a therapeutic trial," he added.
Virginia Kaklamani, MD, professor of medicine at the University of Texas Health Sciences Center in San Antonio, said in an interview that approximately 5 years ago, Dr. Andre was part of the first debate at the SABCS discussing whether oncologists should be conducting next-generation sequencing for their patients with breast cancer.
“At the time, [Dr.] Andre’s comment was that we should not be doing it,” recalled Dr. Kaklamani, who is also leader of the breast cancer program at the Mays Cancer Center at the University of Texas Health San Antonio MD Anderson. “At that point, I think it was clear that we did not have the data we needed to be able to use next-generation sequencing to change our clinical management.”
However, the evidence has evolved. “Based on this clinical trial, I think we now do have the data,” she said. “I think that [next-generation sequencing] is something we will be using more and more in practice and treating our patients based on [validated] genomic alterations.”
Dr. Andre has received grants or advisory board/speaker honoraria from Daiichi Sankyo, Roche, Pfizer, AstraZeneca, Lilly, and Novartis. Dr. Kaklamani has served as a consultant for Puma, AstraZeneca, Athenex, and Immunomedics, has received research funding from Eisai, and has served as a speaker for Pfizer, Celgene, Genentech, and Genomic Health, among other companies.
A version of this article first appeared on Medscape.com.
Black women most at risk for lymphedema after ALND
“Axillary lymph node dissection remains the main risk factor for the development of lymphedema,” Andrea Barrio, MD, associate attending physician, Memorial Sloan Kettering Cancer Center, New York, said at a virtual press briefing at the San Antonio Breast Cancer Symposium (SABCS) 2021.
“We observed a higher incidence of lymphedema in Black women treated with ALND and RT [radiotherapy] after adjustment for other variables,” Dr. Barrio added. “While the etiology for this increased incidence is largely unknown, future studies should address the biologic mechanisms behind racial disparities in lymphedema development.”
Dr. Barrio and colleagues included 276 patients in the analysis – 60% were White, 20% Black, 11% Asian, and 6% Hispanic. The remaining 3% did not report race or ethnicity. Patients' median age at baseline was 48 years, and the median body mass index was 26.4 kg/m2. Slightly over two-thirds of participants had hormone receptor (HR)–positive/HER2-negative breast cancer.
All patients underwent unilateral ALND. About 70% received neoadjuvant chemotherapy (NAC), and the remainder had upfront surgery followed by adjuvant chemotherapy. Ninety-five percent of patients received radiotherapy, and almost all underwent nodal radiotherapy as well.
The median number of lymph nodes removed was 18, and the median number of positive lymph nodes was two. Using a perometer, arm volume was measured at baseline, postoperatively, and every 6 months for a total of 2 years. Lymphedema was defined as a relative increase in arm volume of greater than or equal to 10% from baseline.
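The study's lymphedema definition is a simple threshold on relative arm-volume change from baseline. A minimal sketch of that criterion (the function and example volumes are illustrative, not from the study):

```python
def has_lymphedema(baseline_ml: float, followup_ml: float) -> bool:
    """Study definition: relative arm-volume increase of >= 10% from baseline."""
    relative_change = (followup_ml - baseline_ml) / baseline_ml
    return relative_change >= 0.10

print(has_lymphedema(2000, 2250))  # True  (+12.5% meets the threshold)
print(has_lymphedema(2000, 2100))  # False (+5% does not)
```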
At 24 months, almost 25% of the group had lymphedema, but the incidence differed significantly by race and ethnicity. The highest incidence was observed among Black women, at 39.4%, compared to 27.7% of Hispanic women, 23.4% of Asian women, and 20.5% of White women in the study.
The incidence of lymphedema also varied significantly by treatment group. The incidence was twofold greater among women treated with NAC in comparison with those who underwent upfront surgery (30.9% vs. 11.1%), Dr. Barrio noted.
On multivariate analysis, Black race was the strongest predictor of lymphedema. Compared to White women, Black women had a 3.5-fold greater risk of lymphedema. Hispanic women also had a threefold increased risk compared to White women, but Dr. Barrio cautioned that there were only 16 Hispanic patients in the study.
Older age and increasing time from surgery were also both modestly associated with an increased risk of lymphedema. Among women who ultimately developed lymphedema, “severity did not vary across race or ethnicity with similar relative volume changes observed,” Dr. Barrio said.
Given that the study found that NAC was an independent predictor of lymphedema, should alternatives to NAC be favored?
Although oncologists provide NAC for a variety of reasons, women with HR-positive/HER2-negative disease – which represent the majority of patients in the current analysis – are most likely to have residual disease after NAC, Dr. Barrio noted. This suggests that oncologists need to start looking at surgical de-escalation trials in this group of patients to help them avoid ALND.
Asked whether oncologists still underestimate the impact that lymphedema has on patients’ quality of life, Virginia Kaklamani, MD, professor of medicine, UT Health San Antonio MD Anderson Cancer Center, Texas, said the oncology community has come a long way.
“Any surgeon or medical oncologist will tell you that in the 1960s and 70s, women were having much higher rates of lymphedema than they are now, so this is something that we do recognize and we are a lot more careful about,” she told this news organization.
Surgical techniques are also better now, and the number of lymph nodes that are being removed is much reduced. Nevertheless, when physicians add ALND and radiation to the axilla, “rates of lymphedema go up,” Dr. Kaklamani acknowledged. “We need these women to have physical therapy before they develop lymphedema.”
Dr. Barrio agreed, adding that if oncologists could identify earlier thresholds for lymphedema, before patients develop arm swelling, “we may be able to intervene and see a reduction in its development.”
In the meantime, Dr. Barrio and colleagues are testing the protective value of offering immediate lymphatic reconstruction following ALND versus no reconstruction. In addition, they will be studying banked tissue from Black women to better understand any racial differences in inflammatory responses, the risk of fibrosis, and the reaction to radiotherapy.
“I think we see that inflammation is a key driver of lymphedema development, and so maybe Black women are predisposed to a different inflammatory reaction to treatment or perhaps have higher levels of inflammation at baseline,” Dr. Barrio speculated.
“I think it’s also important to stratify a woman’s risk for lymphedema, and once we can tailor that risk, we can start to identify which patients might benefit from preventative strategies,” she added.
Dr. Barrio has disclosed no relevant financial relationships. Dr. Kaklamani has served as a consultant for Puma, AstraZeneca, Athenex, and Immunomedics and as a speaker for Pfizer, Celgene, Genentech, Genomic Health, Puma, Eisai, Novartis, AstraZeneca, Daiichi Sankyo, and Seattle Genetics. She has also received research funding from Eisai.
A version of this article first appeared on Medscape.com.
Poor night’s sleep impairs glucose control the next morning
Going to bed later than usual and/or getting a poor night’s sleep are both associated with impaired glycemic response to breakfast the following morning in healthy adults, according to a multiple test-meal challenge study conducted over 14 days.
“Our data suggest that sleep duration, efficiency, and midpoint are important determinants of postprandial glycemic control at a population level,” Neil Tsereteli, MD, Lund University Diabetes Centre, Malmo, Sweden, and colleagues wrote in their article, published online Nov. 30, 2021, in Diabetologia.
“And [the results] suggest that one-size-fits-all sleep recommendations are suboptimal, particularly in the context of postprandial glycemic control, a key component of diabetes prevention,” they added.
Prior research on sleep quality and control of glucose lacking
Diet, exercise, and sleep are fundamental components of a healthy lifestyle; however, the role that sleep plays in affecting blood glucose control in generally healthy people has been studied little so far, the researchers wrote.
Sleep disorders can act as a measure of general health as they often occur alongside other health problems. Sleep quality also has a direct causal effect on many conditions such as cardiovascular disease, obesity, and type 2 diabetes. And disturbed sleep caused by conditions such as obstructive sleep apnea is associated with the prevalence of type 2 diabetes and risk of associated complications.
This and other evidence suggest a strong link between glucose regulation and the quality and duration of sleep.
Dr. Tsereteli and colleagues set out to examine this further in the Personalized Responses to Dietary Composition Trial 1, which involved 953 healthy adults who consumed standardized meals over 2 weeks in a clinic setting and at home.
“The meals were consumed either for breakfast or lunch in a randomized meal order and consisted of eight different standardized meals,” the researchers wrote.
Activity and sleep were monitored using a wearable device with an accelerometer. Postprandial blood glucose levels were measured using a continuous glucose monitor.
Sleep variables – including quality, duration, and timing – and their impact on glycemic response to breakfast the following morning were compared between participants and within each individual.
Better sleep efficiency, better glucose control
The study found that, although there was no significant association between length of sleep period and postmeal glycemic response, there was a significant interaction when the nutritional content of the breakfast meal was also considered.
Longer sleep periods were associated with lower blood glucose following high-carbohydrate and high-fat breakfasts, indicating better blood glucose control.
Additionally, the researchers observed a within-person effect in which a study participant who slept for longer than they typically would was likely to have reduced postprandial blood glucose following a high-carbohydrate or high-fat breakfast the next day.
The authors also found a significant link between sleep efficiency (ratio of time asleep to total length of sleep period) and glycemic control. When a participant slept more efficiently than usual, their postprandial blood glucose also tended to be lower than usual.
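Sleep efficiency as defined here is just the ratio of time asleep to the total length of the sleep period. A minimal sketch of that calculation (the function name and example durations are illustrative, not from the study):

```python
def sleep_efficiency(minutes_asleep: float, sleep_period_minutes: float) -> float:
    """Ratio of time asleep to total length of the sleep period (time in bed)."""
    return minutes_asleep / sleep_period_minutes

# e.g., 7 hours asleep within an 8-hour sleep period
print(round(sleep_efficiency(420, 480), 3))  # 0.875
```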
“This effect was largely driven by sleep onset (going to bed later) rather than sleep offset (waking up later),” Dr. Tsereteli and colleagues noted.
Sleep a key pillar of health
Asked whether these particular sleep effects might be exacerbated in patients with diabetes, senior author Paul Franks, MD, also from the Lund University Diabetes Centre, felt they could not meaningfully extrapolate results to people with diabetes, given that many take glucose-lowering medications.
“However, it is likely that these results would be similar or exacerbated in people with prediabetes, as glucose fluctuations in this subgroup of patients are generally greater than in people with normoglycemia,” he noted in an interview.
“Sleep is a key pillar of health, and focusing on both sleep and diet is key for healthy blood glucose control,” he added.
“Compensating for a bad night’s sleep by consuming a very sugary breakfast or energy drinks is likely to be especially detrimental for blood glucose control,” Dr. Franks said.
The study was funded by Lund University. Dr. Tsereteli and Dr. Franks reported no relevant financial relationships.
A version of this article first appeared on Medscape.com.
HPV vaccines reduce cervical cancer rates in young females
Two studies have found that, provided young females are immunized with the human papillomavirus (HPV) vaccine at a young enough age, both the incidence of and mortality from cervical cancer can be dramatically curtailed, data from the United Kingdom and, to a lesser extent, the United States indicate.
In the U.K. study, published online in The Lancet, researchers showed that the national vaccination program against HPV, initiated in England in 2008, has all but eradicated cervical cancer and cervical intraepithelial neoplasia (CIN3) in young girls who received the vaccine at the age of 12 and 13 years (school year 8) prior to their sexual debut.
In this age group, cervical cancer rates were 87% lower than rates among previously nonvaccinated generations, while CIN3 rates were reduced by 97%, as researchers report. “It’s been incredible to see the impact of HPV vaccination, and now we can prove it prevented hundreds of women from developing cancer in England,” senior author Peter Sasieni, MD, King’s College London, said in a statement. “To see the real-life impact of the vaccine has been truly rewarding,” he added.
“This study provides the first direct evidence of the impact of the UK HPV vaccination campaign on cervical cancer incidence, showing a large reduction in cervical cancer rates in vaccinated cohorts,” Kate Soldan, MD, UK Health Security Agency, London, said in the same statement.
“This represents an important step forward in cervical cancer prevention, and we hope that these new results encourage uptake as the success of the vaccination programme relies not only on the efficacy of the vaccine but also the proportion of the population vaccinated,” she added.
Vanessa Saliba, MD, a consultant epidemiologist for the UK Health Security Agency, agreed, adding that “these remarkable findings confirm that the HPV vaccine saves lives by dramatically reducing cervical cancer rates among women.”
“This reminds us that vaccines are one of the most important tools we have to help us live longer, healthier lives,” she reemphasized.
British HPV program
When initiated in 2008, the national HPV vaccination program used the bivalent vaccine Cervarix, which targets HPV types 16 and 18. As the researchers noted, these two HPV types are responsible for 70%-80% of all cervical cancers in England.
However, in 2012, the program switched to the quadrivalent HPV vaccine (Gardasil), which is also effective against two additional HPV types, 6 and 11, both of which cause genital warts. The program also originally recommended a three-dose regimen for both HPV vaccines.
Now, only two doses of the vaccine are given to girls under the age of 15 even though it has been shown that a single dose of the HPV vaccine provides good protection against persistent infection, with efficacy rates that are similar to that of three doses, as the authors point out.
Among the cohort eligible for vaccination at 12 or 13 years of age, 89% received at least one dose of the HPV vaccine while 85% of the same age group received all three shots.
Cancer registry
Data from a population-based cancer registry was used to estimate the early effect of the bivalent HPV program on the incidence of cervical cancer and CIN3 in England between January 2006 and June 2019. During the study interval, there were 27,946 diagnoses of cervical cancer and 318,058 diagnoses of CIN3, lead author Milena Falcaro, MD, King’s College London, and colleagues report. Participants were then analyzed separately according to their age at the time of vaccination and the incidence rates calculated for both cervical cancer and CIN3 in the three separate groups.
For slightly older girls who received the vaccine between 14 and 16 years of age (school year 10-11), cervical cancer was reduced by 62% while CIN3 rates were reduced by 75%. For those who received the vaccine between 16 and 18 years of age (school year 12-13), cervical cancer rates were reduced by 34% while CIN3 rates were reduced by 39%, study authors add.
Indeed, the authors estimate that by June 2019 there were approximately 450 fewer cases of cervical cancer and 17,200 fewer cases of CIN3 than would otherwise have been expected in the vaccinated population in England.
The authors acknowledge that cervical cancer is rare in young women and vaccinated populations are still young. For example, the youngest recipients would have been immunized at the age of 12 in 2008 and would still be only 23 years old in 2019 when the study ended.
Thus, the authors emphasize that, because the vaccinated populations are still young, it’s too early to assess the full effect of HPV vaccination on cervical cancer rates.
Asked to comment on the study, Maurice Markman, MD, president, Medicine and Science Cancer Treatment Centers of America, pointed out that results from the British study are very similar to those from a Swedish study assessing the effect of the quadrivalent vaccine alone.
“You can put any superlatives you want in here, but these are stunningly positive results,” Dr. Markman said in an interview. As an oncologist who has been treating cervical cancer for 40 years – particularly advanced cervical cancer – “I can tell you this is one of the most devastating diseases to women, and the ability to eliminate this cancer with something as simple as a vaccine is the goal of cancer therapy, and it’s been remarkably successful,” he stressed.
Editorial commentary
Commenting on the findings, editorialists Maggie Cruickshank, MD, University of Aberdeen (Scotland), and Mihaela Grigore, MD, University of Medicine and Pharmacy, Iasi, Romania, point out that published reports evaluating the effect of HPV vaccination on cervical cancer rates have been scarce until now.
“The most important issue, besides the availability of the vaccine ... is the education of the population to accept vaccination because a high rate of immunization is a key element of success,” they emphasize. “Even in a wealthy country such as England with free access to HPV immunization, uptake has not reached the 90% vaccination target of girls aged 15 years set by the WHO [World Health Organization],” the editorialists add.
Dr. Cruickshank and Dr. Grigore also suggest that the effect HPV vaccination is having on cervical cancer rates as shown in this study should also stimulate vaccination programs in low- and middle-income countries where cervical cancer is a far greater public health issue than it is in countries with established systems of vaccination and screening.
HPV vaccination in the United States
The HPV vaccination program is similarly reducing the incidence of and mortality from cervical cancer among younger women in the United States, who are the most likely to have received the vaccine. As reported by lead author Justin Barnes, MD, Washington University, St. Louis, the incidence of cervical cancer dropped by 37.7% from 2001-2005 to 2010-2017 in girls and young women between 15 and 24 years of age.
The U.S. study was published online in JAMA Pediatrics.
“HPV vaccine coverage in the U.S. has improved over the last few years although it was quite poor for many years,” senior author of the U.K. study, Peter Sasieni, MD, King’s College London, said in an interview. “Thus, one would anticipate a lower impact on the population in the U.S., because vaccine uptake, particularly in those aged 11-14 years was so much lower than it was in the U.K.,” he noted.
SEER databases
National age-adjusted cervical cancer incidence and mortality data from January 2001 through December 2017 for women and girls between 15 and 39 years of age were obtained from the combined Surveillance, Epidemiology, and End Results as well as the National Program of Cancer Registries databases. Mortality data was obtained from the National Center for Health Statistics.
Investigators then compared percentage changes in the incidence of and mortality from cervical cancer from January 2001 through December 2005 during the prevaccination years to that observed between January 2010 through December 2017 during the postvaccination years. They also compared incidence and mortality rates in three different cohorts: females between 15 and 24 years of age, those between 25 and 29 years of age, and those between 30 and 39 years of age.
“The older two groups were included as comparison, given their low vaccination rates,” the authors explained. Results showed that, during the same study interval from 2001 through 2005 to 2010 through 2017, the incidence of cervical cancer dropped by only 16.1% in women between 25 and 29 years of age and by only 8% for women between 30 and 39 years of age, the investigators report.
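The comparison reported here is a percentage change in an age-adjusted rate between the prevaccination and postvaccination periods. As a hedged sketch of that arithmetic (the function and the rate values below are invented placeholders, not the study's data):

```python
# Illustrative sketch of the comparison described above: the percentage change
# in an age-adjusted incidence or mortality rate between a prevaccination
# period and a postvaccination period. Placeholder numbers, not study data.
def percent_change(pre_rate: float, post_rate: float) -> float:
    """Percentage change from the prevaccination to the postvaccination rate."""
    if pre_rate == 0:
        raise ValueError("prevaccination rate must be nonzero")
    return (post_rate - pre_rate) / pre_rate * 100.0

# A hypothetical rate falling from 10.0 to 6.23 per 100,000 is a 37.7% decline:
print(round(percent_change(10.0, 6.23), 1))  # -37.7
```

A negative value indicates a decline, matching the drops of 37.7%, 16.1%, and 8% reported for the three age cohorts.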
Reductions in mortality from cervical cancer were striking only in the youngest age group, females between 15 and 24 years of age, among whom there was a 43.3% reduction in mortality from 2001-2005 to 2010-2017, as Dr. Barnes and colleagues note.
The pattern changed substantially in women between the ages of 25 and 29, among whom there was a 4.3% increase in mortality from cervical cancer during the same study interval, and there was only a small, 4.7% reduction among women between 30 and 39 years of age, investigators add. In absolute terms, mortality rates from cervical cancer were very low, at only 0.6 per 100,000 in females between 15 and 24 years of age.
This compared to a mortality rate of 0.57 per 100,000 in women between 25 and 29 years of age and 1.89 per 100,000 in the oldest age group. “These nationwide data showed decreased cervical cancer incidence and mortality among women and girls aged 15-24 years after HPV vaccine introduction,” Dr. Barnes notes.
“Thus, the current study adds to knowledge by quantitatively comparing changes in cervical cancer incidence by age-based vaccine eligibility and providing suggestive evidence for vaccine-associated decreases in cervical cancer mortality,” investigators add.
However, as the authors also point out, while the reduction in mortality from cervical cancer associated with HPV vaccination may translate to older age groups as HPV-vaccinated cohorts age, “the number of deaths and hence the number of potentially averted deaths in young women and girls was small,” they caution, “and efforts to further improve vaccination uptake remain important.”
None of the authors or the editorialists had any conflicts of interest to declare.
Big drop in U.S. cervical cancer rates, mortality in younger women
The analysis adds to a growing body of evidence demonstrating vaccine-associated changes in cervical cancer incidence and mortality.
Previous data from the United Kingdom, published earlier in November, showed that cervical cancer rates were 87% lower among girls who received the HPV vaccine compared to previously unvaccinated generations. Based on the analysis, the authors concluded that the UK’s HPV immunization program “almost eliminated cervical cancer” in women born since September 1995.
The latest study, published Nov. 29 in JAMA Pediatrics, reports a 38% drop in cervical cancer incidence and a 43% decline in mortality among young women and girls after HPV vaccination was introduced in the United States.
“These results are encouraging,” Peter Sasieni, MD, of King’s College London, and senior author on the U.K. study, told this news organization in an email.
The difference in incidence rates between the U.K. and U.S. studies, Dr. Sasieni explained, is likely due to HPV vaccine coverage not expanding as significantly in the United States as it has in the United Kingdom, and “thus one would anticipate a lower impact on the population in the U.S.”
In the U.S. analysis, Justin Barnes, MD, a radiation oncology resident at Washington University, St. Louis, and colleagues examined cervical cancer incidence between January 2001 and December 2017 using Surveillance, Epidemiology, and End Results and National Program of Cancer Registries data as well as mortality data from the National Center for Health Statistics.
Dr. Barnes and colleagues then compared changes in cervical cancer incidence and mortality between prevaccination years (January 2001 to December 2005) and postvaccination years (January 2010 to December 2017) among three age cohorts – 15-24 years, 25-29 years, and 30-39 years.
“The older 2 groups were included as comparison, given their low vaccination rates,” Dr. Barnes and colleagues explained.
Results show that between the prevaccination and postvaccination periods, the incidence of cervical cancer dropped by 38% in the youngest cohort, compared with only 16% in the middle cohort and 8% in the oldest cohort.
Women and girls in the youngest group saw a striking drop in mortality: a 43% decline, which translated to a mortality rate of 0.6 per 100,000.
On the other hand, the authors report a 4.7% decline in mortality in the oldest cohort and a 4.3% increase in the middle cohort – translating to mortality rates of 1.89 per 100,000 and 0.57 per 100,000, respectively.
Overall, “these nationwide data showed decreased cervical cancer incidence and mortality among women and girls aged 15-24 years after HPV vaccine introduction,” Dr. Barnes and colleagues wrote. The changes in cervical cancer incidence and mortality observed in the youngest age group “were greater than changes in those aged 25 to 29 years and 30 to 39 years, suggesting possible associations with HPV vaccination.”
This analysis lines up with previous evidence from U.S. epidemiologic data, which “have shown decreased cervical cancer incidence after vaccine implementation in women and girls aged 15 to 24 years but not older women.”
Although “the number of deaths and hence the number of potentially averted deaths in young women and girls was small,” the study adds to the current literature by “providing suggestive evidence for vaccine-associated decreases in cervical cancer mortality,” investigators concluded.
The authors have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.
FROM JAMA PEDIATRICS
Rhinosinusitis without nasal polyps lowers QoL in COPD
Concomitant rhinosinusitis without nasal polyps (RSsNP) in patients with chronic obstructive pulmonary disease (COPD) is associated with poorer disease-specific, health-related quality of life (HRQoL), a Norwegian study shows.
“Chronic rhinosinusitis has an impact on patients’ HRQoL,” lead author Marte Rystad Øie, Trondheim (Norway) University Hospital, said in an interview.
“We found that RSsNP in COPD was associated with more psychological issues, higher COPD symptom burden, and overall COPD-related HRQoL after adjusting for lung function, so RSsNP does have clinical relevance and [our findings] support previous studies that have suggested that rhinosinusitis should be recognized as a comorbidity in COPD,” she emphasized.
The study was published in the Nov. 1 issue of Respiratory Medicine.
Study sample
The study sample consisted of 90 patients with COPD and 93 control subjects, all age 40-80 years. “Generic HRQoL was measured with the Norwegian version of the SF-36v2 Health Survey Standard questionnaire,” the authors wrote, and responses were compared between patients with COPD and controls as well as between subgroups of patients who had COPD both with and without RSsNP.
Disease-specific HRQoL was assessed by the Sinonasal Outcome Test-22 (SNOT-22), the St. George's Respiratory Questionnaire (SGRQ), and the COPD Assessment Test (CAT), and responses were again compared between patients who had COPD with and without RSsNP. In the COPD group, “severe” and “very severe” airflow obstruction was present in 56.5% of patients with RSsNP compared with 38.6% of patients without RSsNP, Ms. Øie reported.
Furthermore, total SNOT-22 and psychological subscale scores were both significantly higher in patients who had COPD with RSsNP than in those without. Among those with RSsNP, the mean total SNOT-22 score was 36.8 and the mean psychological subscale score was 22.6; corresponding means among patients who had COPD without RSsNP were 9.5 and 6.5, respectively (P < .05).
Total scores on the SGRQ were also significantly greater in patients who had COPD with RSsNP, at a mean of 43.3 compared with a mean of 34 in those without RSsNP, the investigators observed. Similarly, SGRQ symptom and activity domain scores were significantly greater for patients who had COPD with RSsNP than for those without nasal polyps. The total CAT score was likewise significantly higher in patients who had COPD with RSsNP, at a mean of 18.8 compared with a mean of 13.5 in those without RSsNP (P < .05).
Indeed, patients with RSsNP were four times more likely to have CAT scores indicating the condition was having a high or very high impact on their HRQoL compared with patients without RSsNP (P < .001). As the authors pointed out, having a high impact on HRQoL translates into patients having to stop their desired activities and having no good days in the week.
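For context, CAT totals (range 0-40) are commonly banded by their impact on daily life; a minimal sketch assuming the widely used interpretation thresholds, which the study itself does not spell out:

```python
def cat_impact(score: float) -> str:
    """Band a COPD Assessment Test (CAT) total score (0-40) by impact level.

    Thresholds follow the commonly cited CAT interpretation guide (an
    assumption; the study reports only the high/very-high categories).
    """
    if score < 10:
        return "low"
    if score <= 20:
        return "medium"
    if score <= 30:
        return "high"
    return "very high"
```

By these bands, both group means reported above (18.8 and 13.5) fall in the medium range; the fourfold difference concerns the proportion of individual patients scoring in the high or very high bands.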
“This suggests that having RSsNP substantially adds to the activity limitation experienced by patients with COPD,” they emphasized. The authors also found that RSsNP was significantly associated with poorer physical functioning after adjusting for COPD as reflected by SF-36v2 findings, again suggesting that patients who had COPD with concomitant RSsNP have an additional limitation in activity and a heavier symptom burden.
As Ms. Øie explained, rhinosinusitis has two clinical phenotypes: that with nasal polyps and that without nasal polyps, the latter being twice as prevalent. In fact, rhinosinusitis with nasal polyps is associated with asthma, as she pointed out. Given, however, that rhinosinusitis without polyps is amenable to treatment with daily use of nasal steroids, it is possible to reduce the burden of symptoms and psychological stress associated with RSsNP in COPD.
Limitations of the study include the fact that investigators did not assess patients for the presence of any comorbidities that could contribute to poorer HRQoL in this patient population.
The study was funded by the Liaison Committee between the Central Norway Regional Health Authority and the Norwegian University of Science and Technology. The authors have disclosed no relevant financial relationships.
A version of this article first appeared on Medscape.com.