Blood cancer patients, survivors hesitate over COVID-19 vaccine

Article Type
Changed
Thu, 08/26/2021 - 15:49

Nearly one in three blood cancer patients and survivors say they would be unlikely to get a COVID-19 vaccine, or are unsure about getting one, if a vaccine were available to them. The findings come from a nationwide survey by The Leukemia & Lymphoma Society, which collected 6,517 responses.

“These findings are worrisome, to say the least,” Gwen Nichols, MD, chief medical officer of the society, said in a statement.

“We know cancer patients – and blood cancer patients in particular – are susceptible to the worst effects of the virus [and] all of us in the medical community need to help cancer patients understand the importance of getting vaccinated,” she added.

The survey – the largest to date asking cancer patients and survivors about their attitudes toward COVID-19 vaccines – was published online March 8 by The Leukemia & Lymphoma Society.

Survey sample

The survey asked patients with blood cancer and survivors about their attitudes toward COVID-19 and COVID-19 vaccines.

“The main outcome [was] vaccine attitudes,” noted the authors, headed by Rena Conti, PhD, dean’s research scholar, Boston University.

Respondents were asked: “How likely are you to choose to get the vaccine?” Participants could indicate they were very unlikely, unlikely, neither likely nor unlikely, likely, or very likely to get vaccinated.

“We found that 17% of respondents indicate[d] that they [were] unlikely or very unlikely to take a vaccine,” Dr. Conti and colleagues observed.

Among the 17% – deemed to be “vaccine hesitant” – slightly over half (54%) stated they had concerns about the side effects associated with COVID-19 vaccination and believed neither of the two newly approved vaccines had been or would ever be tested properly.

The survey authors noted that there is no reason to believe COVID-19 vaccines are any less safe in patients with blood cancers, but concerns have been expressed that patients with some forms of blood cancer or those undergoing certain treatments may not achieve the same immune response to the vaccine as would noncancer controls.

Importantly, the survey was conducted Dec. 1-21, 2020, and responses differed depending on whether respondents answered the survey before or after the Pfizer-BioNTech and Moderna vaccines had been given emergency use authorization by the Food and Drug Administration starting Dec. 10, 2020. 

There was a slight increase in positive responses after the vaccines were granted emergency use authorization. (The one-third of participants who responded after authorization were 3.7% more likely to indicate they would get vaccinated.) “This suggests that hesitancy may be influenced by emerging information dissemination, government action, and vaccine availability, transforming the hypothetical opportunity of vaccination to a real one,” the survey authors speculated.

Survey respondents who were vaccine hesitant were also over 14% more likely to indicate that they didn’t think they would require hospitalization should they contract COVID-19. But clinical data have suggested that approximately half of patients with a hematological malignancy who required hospitalization for COVID-19 die from the infection, the authors noted.

“Vaccine hesitant respondents [were] also significantly less likely to engage in protective health behaviors,” the survey authors pointed out. For example, they were almost 4% less likely to have worn a face mask and 1.6% less likely to have taken other protective measures to guard against COVID-19 infection.

Need for clear messaging

To counter vaccine hesitancy, the authors suggest there is a need for clear, consistent messaging targeting patients with cancer that emphasizes the risks of COVID-19 and underscores vaccine benefits.

Dr. Conti pointed out that patients with blood cancer are, in fact, being given preferential access to vaccines in many communities, although, as she also noted, access alone does not mean patients are willing to get vaccinated.

“We need both adequate supply and strong demand to keep this vulnerable population safe,” Dr. Conti emphasized.

The Leukemia & Lymphoma Society plans to repeat the survey in the near future to assess patients’ and survivors’ access to vaccines as well as their willingness to get vaccinated.

The authors have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


ACG: CRC screening should start at age 45

Article Type
Changed
Wed, 05/26/2021 - 13:41

Colorectal cancer (CRC) screening is now recommended for average-risk individuals starting at age 45 years, according to the American College of Gastroenterology’s updated guideline.

The starting age was previously 50 years for most patients. However, for Black patients, the starting age was lowered to 45 years in 2005.

The new guidance brings the ACG in line with recommendations of the American Cancer Society, which lowered the starting age to 45 years for average-risk individuals in 2018.

However, the U.S. Preventive Services Task Force, the Multi-Specialty Task Force, and the American College of Physicians still recommend that CRC screening begin at the age of 50.

The new ACG guideline was published in March 2021 in the American Journal of Gastroenterology. It was last updated in 2009.

The ACG said that the move was made in light of reports of an increase in the incidence of CRC in adults younger than 50.

“It has been estimated that [in the United States] persons born around 1990 have twice the risk of colon cancer and four times the risk of rectal cancer, compared with those born around 1950,” guideline author Aasma Shaukat, MD, MPH, University of Minnesota, Minneapolis, and colleagues pointed out.

“The fact that other developed countries are reporting similar increases in early-onset CRC and birth-cohort effects suggests that the Western lifestyle (especially exemplified by the obesity epidemic) is a significant contributor,” the authors added.

The new ACG guideline also emphasizes the importance of initiating CRC screening for average-risk patients aged 50-75 years. “Given that current rates of screening uptake are close to 60% (57.9% ages 50-64 and 62.4% ages 50-75), expanding the population to be screened may reduce these rates as emphasis shifts to screening 45- to 49-year-olds at the expense of efforts to screen the unscreened 50- to 75-year-olds,” the authors commented.

The guideline now suggests that the decision to continue screening after age 75 be individualized, noting that the benefits of screening are limited for those who are not expected to live for another 7-10 years.

For patients with a family history of CRC, the guideline authors recommend initiating CRC screening at age 40 for those with one or two first-degree relatives with either CRC or advanced colorectal polyps.

They also recommend screening colonoscopy over any other screening modality if the first-degree relative is younger than 60 or if two or more first-degree relatives of any age have CRC or advanced colorectal polyps. For such patients, screening should be repeated every 5 years.

For screening average-risk individuals, either colonoscopy or fecal immunochemical testing (FIT) is recommended. If colonoscopy is used, it should be repeated every 10 years. FIT should be conducted on an annual basis.

This is somewhat in contrast to recent changes proposed by the American Gastroenterological Association. The AGA recommends greater use of noninvasive testing, such as with fecal occult blood tests, initially. It recommends that initial colonoscopy be used only for patients at high risk for CRC.

For individuals unwilling or unable to undergo colonoscopy or FIT, the ACG suggests flexible sigmoidoscopy, multitarget stool DNA testing, CT colonography, or colon capsule. Only colonoscopy is a single-step test; all other screening modalities require a follow-up colonoscopy if test results are positive.

“We recommend against the use of aspirin as a substitute for CRC screening,” the ACG members emphasized. Rather, they suggest that the use of low-dose aspirin be considered only for patients aged 50-69 years whose risk for cardiovascular disease over the next 10 years is at least 10% and who are at low risk for bleeding.

To reduce their risk for CRC, patients need to take aspirin for at least 10 years, they pointed out.

Quality indicators

For endoscopists who perform colonoscopy, the ACG recommended that all operators determine their individual cecal intubation rates, adenoma detection rates, and withdrawal times. They also recommended that endoscopists spend at least 6 minutes inspecting the mucosa during withdrawal and achieve a cecal intubation rate of at least 95% for all patients screened.

The ACG recommended remedial training for any provider whose adenoma detection rate is less than 25%.

Screening rates dropped during pandemic

The authors of the new recommendations also pointed out that, despite public health initiatives to boost CRC screening in the United States and the availability of multiple screening modalities, almost one-third of individuals who are eligible for CRC screening do not undergo screening.

Moreover, the proportion of individuals not being screened has reportedly increased during the pandemic; in one report, claims for colonoscopies dropped by 90% in April 2020. “Colorectal cancer screening rates must be optimized to reach the aspirational target of >80%,” the authors emphasized.

“A recommendation to be screened by a PCP [primary care provider] – who is known and trusted by the person – is clearly effective in raising participation,” they added.

Dr. Shaukat has served as a scientific consultant for Iterative Scopes and Freenome. Other ACG guideline authors reported numerous financial relationships.

A version of this article first appeared on Medscape.com.


Bone loss common in kidney stone patients, yet rarely detected

Article Type
Changed
Thu, 03/11/2021 - 16:25

Almost one in four men and women diagnosed with kidney stones have osteoporosis or a history of fracture at the time of their diagnosis, yet fewer than 10% undergo bone mineral density (BMD) screening, a retrospective analysis of a Veterans Health Administration database shows.

Because the majority of those analyzed in the VA dataset were men, this means that middle-aged and older men with kidney stones have about the same risk for osteoporosis as postmenopausal women do, but BMD screening for such men is not currently recommended, the study notes.

“These findings suggest that the risk of osteoporosis or fractures in patients with kidney stone disease is not restricted to postmenopausal women but is also observed in men, a group that is less well recognized to be at risk,” Calyani Ganesan, MD, of Stanford (Calif.) University and colleagues say in their article, published online March 3 in the Journal of Bone and Mineral Research.

“We hope this work raises awareness regarding the possibility of reduced bone strength in patients with kidney stones, [and] in our future work, we hope to identify which patients with kidney stones are at higher risk for osteoporosis or fracture to help guide bone density screening efforts by clinicians in this population,” Dr. Ganesan added in a statement.

VA dataset: Just 9.1% had DXA after kidney stone diagnosed

A total of 531,431 patients with a history of kidney stone disease were identified in the VA dataset. Of these, 23.6% either had been diagnosed with osteoporosis or had a history of fracture around the time of their kidney stone diagnosis. The most common diagnosis was a non-hip fracture, seen in 19% of patients, Dr. Ganesan and colleagues note, followed by osteoporosis in 6.1%, and hip fracture in 2.1%.

The mean age of the patients who concurrently had received a diagnosis of kidney stone disease and osteoporosis or had a fracture history was 64.2 years. In this cohort, more than 91% were men. The majority of the patients were White.



Among some 462,681 patients who had no prior history of either osteoporosis or fracture before their diagnosis of kidney stones, only 9.1% had undergone dual-energy x-ray absorptiometry (DXA) screening for BMD in the 5 years after their kidney stone diagnosis.

“Of those who completed DXA ... 20% were subsequently diagnosed with osteoporosis,” the authors note – 19% with non-hip fracture, and 2.4% with hip fracture.

Importantly, 85% of patients with kidney stone disease who were screened with DXA and were later diagnosed with osteoporosis were men.

“Given that almost 20% of patients in our cohort had a non-hip fracture, we contend that osteoporosis is underdiagnosed and undertreated in older men with kidney stone disease,” the authors stress.

Perform DXA screen in older men, even in absence of hypercalciuria

The authors also explain that the most common metabolic abnormality associated with kidney stones is high urine calcium excretion, or hypercalciuria.

“In a subset of patients with kidney stones, dysregulated calcium homeostasis may be present in which calcium is resorbed from bone and excreted into the urine, which can lead to osteoporosis and the formation of calcium stones,” they explain.

However, when they carried out a 24-hour assessment of urine calcium excretion on a small subset of patients with kidney stones, “we found no correlation between osteoporosis and the level of 24-hour urine calcium excretion,” they point out.

Even when the authors excluded patients who were taking a thiazide diuretic – a class of drugs that decreases urine calcium excretion – there was no correlation between osteoporosis and the level of 24-hour urine calcium excretion.

The investigators suggest it is possible that, in the majority of patients with kidney stones, the cause of hypercalciuria is more closely related to overabsorption of calcium from the gut, not to overresorption of calcium from the bone.

“Nonetheless, our findings indicate that patients with kidney stone disease could benefit from DXA screening even in the absence of hypercalciuria,” they state.

“And our findings provide support for wider use of bone mineral density screening in patients with kidney stone disease, including middle-aged and older men, for whom efforts to mitigate risks of osteoporosis and fractures are not commonly emphasized,” they reaffirm.

The study was funded by the VA Merit Review and the National Institute of Diabetes and Digestive and Kidney Diseases. The authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Large study finds trans men on testosterone at risk for blood clots

Article Type
Changed
Tue, 02/23/2021 - 09:10

 

Over 10% of transgender men (individuals assigned female at birth who are transitioning to male) who take testosterone develop high hematocrit levels that could put them at greater risk for a thrombotic event, and the largest increase in levels occurs in the first year after starting therapy, a new Dutch study indicates.

Erythrocytosis, defined as a hematocrit greater than 0.50 L/L, is a potentially serious side effect of testosterone therapy, say Milou Cecilia Madsen, MD, and colleagues in their article published online Feb. 18, 2021, in the Journal of Clinical Endocrinology & Metabolism.

When hematocrit was measured twice, 11.1% of the cohort of 1,073 trans men had levels in excess of 0.50 L/L over a 20-year follow-up.

“Erythrocytosis is common in transgender men treated with testosterone, especially in those who smoke, have [a] high BMI [body mass index], and [who] use testosterone injections,” Dr. Madsen, of the VU University Medical Center Amsterdam, said in a statement from the Endocrine Society.

“A reasonable first step in the care of transgender men with high red blood cells while on testosterone is to advise them to quit smoking, switch injectable testosterone to gel, and, if BMI is high, to lose weight,” she added.
 

First large study of testosterone in trans men with 20-year follow-up

Transgender men often undergo testosterone therapy as part of gender-affirming treatment. 

Secondary erythrocytosis, a condition in which the body makes too many red blood cells, is a common side effect of testosterone therapy that can increase the risk of thromboembolic events, heart attack, and stroke, Dr. Madsen and colleagues explained.

This is the first study of a large cohort of trans men taking testosterone therapy followed for up to 20 years. Because of the large sample size, statistical analysis with many determinants could be performed. And because of the long follow-up, a clear time relation between initiation of testosterone therapy and hematocrit could be studied, they noted.

Participants were part of the Amsterdam Cohort of Gender Dysphoria study, a large cohort of individuals seen at the Center of Expertise on Gender Dysphoria at Amsterdam University Medical Center between 1972 and 2015.

Laboratory measurements taken between 2004 and 2018 were available for analysis. Trans men visited the center every 3-6 months during their first year of testosterone therapy and were then monitored every year or every other year.

Long-acting undecanoate injection was associated with the highest risk of a hematocrit level greater than 0.50 L/L, and the risk of erythrocytosis in those who took long-acting intramuscular injections was about threefold higher, compared with testosterone gel (adjusted odds ratio, 3.1).
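The adjusted odds ratio of 3.1 comes from a regression model that accounts for other factors. As a simpler illustration of what an odds ratio measures, the sketch below computes an unadjusted odds ratio from a 2x2 table; the counts are hypothetical placeholders, not data from the study.

```python
# Illustrative only: unadjusted odds ratio from a 2x2 table.
# The counts below are invented for illustration, not study data.

def odds_ratio(exposed_events: int, exposed_no_events: int,
               unexposed_events: int, unexposed_no_events: int) -> float:
    """Ratio of the odds of an event in the exposed vs. unexposed group."""
    odds_exposed = exposed_events / exposed_no_events
    odds_unexposed = unexposed_events / unexposed_no_events
    return odds_exposed / odds_unexposed

# Hypothetical: erythrocytosis yes/no under long-acting injections vs. gel.
print(round(odds_ratio(30, 70, 12, 88), 2))  # 3.14
```

An odds ratio near 3, as here, corresponds to the roughly threefold higher odds of erythrocytosis reported for long-acting intramuscular injections relative to gel.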

In contrast, short-acting ester injections and oral administration of testosterone had a similar risk for erythrocytosis, as did testosterone gel.

Other determinants of elevated hematocrit included smoking, medical history of a number of comorbid conditions, and older age on initiation of testosterone.

In contrast, “higher testosterone levels per se were not associated with an increased odds of hematocrit greater than 0.50 L/L,” the authors noted.
 

Current advice for trans men based on old guidance for hypogonadism

The authors said that current advice for trans men is based on 2008 recommendations for testosterone-treated hypogonadal cisgender men (those assigned male at birth), which hold that a hematocrit greater than 0.50 L/L carries a moderate to high risk of adverse outcomes. For levels greater than 0.54 L/L, cessation of testosterone therapy, a dose reduction, or therapeutic phlebotomy is advised to reduce the risk of adverse events. For levels of 0.50-0.54 L/L, no clear advice is given.

But questions remain as to whether these guidelines are applicable to trans men because the duration of testosterone therapy is much longer in trans men and hormone treatment often cannot be discontinued without causing distress.

Meanwhile, hematology guidelines indicate an upper limit for hematocrit for cis females of 0.48 L/L.

“It could be argued that the upper limit for cis females should be applied, as trans men are born with female genetics,” the authors said. “This is a subject for further research.”
 

Duration of testosterone therapy impacts risk of erythrocytosis

In the study, the researchers found that longer duration of testosterone therapy increased the risk of developing hematocrit levels greater than 0.50 L/L. For example, after 1 year, the cumulative incidence of erythrocytosis was 8%; after 10 years, it was 38%; and after 14 years, it was 50%.

Until more specific guidance is developed for trans men, if hematocrit levels rise to 0.50-0.54 L/L, the researchers suggested taking “reasonable” steps to prevent a further increase:

  • Consider switching patients who use injectable testosterone to transdermal products.
  • Advise patients with a BMI greater than 25 kg/m2 to lose weight to attain a BMI of 18.5-25.
  • Advise patients to stop smoking.
  • Pursue treatment optimization for chronic lung disease or sleep apnea.
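The thresholds and steps above can be collected into a small decision helper. This is an illustrative sketch only, not clinical software: the function name and the handling of exact boundary values are assumptions, and the actions are paraphrased from the 2008 guidance and the authors' suggested mitigation steps.

```python
# Illustrative only, not clinical software. Encodes the hematocrit
# thresholds described in the article; boundary handling is an assumption.

def hematocrit_advice(hct: float) -> str:
    """Map a hematocrit value (L/L) to the action described in the text."""
    if hct > 0.54:
        # 2008 hypogonadism guidance: intervene above 0.54 L/L
        return ("stop testosterone, reduce the dose, or consider "
                "therapeutic phlebotomy")
    if hct >= 0.50:
        # 0.50-0.54 L/L: the authors' suggested mitigation steps
        return ("mitigate: switch injectable testosterone to transdermal, "
                "weight loss if BMI > 25, smoking cessation, and "
                "optimized treatment of lung disease or sleep apnea")
    return "continue routine monitoring"

print(hematocrit_advice(0.52))
```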

The study had no external funding. The authors reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


New approach to breast screening based on breast density at 40

Article Type
Changed
Thu, 12/15/2022 - 17:30

 

A new approach to breast screening proposes that all women should have a baseline evaluation of breast density by mammography at the age of 40.

The result would then be used to stratify further screening, with annual screening starting at age 40 for average-risk women who have dense breasts, and screening every 2 years starting at age 50 for women without dense breasts.

Such an approach would be cost effective and offers a more targeted risk-based strategy for the early detection of breast cancer when compared with current practices, say the authors, led by Tina Shih, PhD, University of Texas MD Anderson Cancer Center, Houston.

Their modeling study was published online in the Annals of Internal Medicine.

However, experts writing in an accompanying editorial are not persuaded. Karla Kerlikowske, MD, and Kirsten Bibbins-Domingo, MD, PhD, both from the University of California, San Francisco, point out that not all women with dense breasts are at increased risk for breast cancer. They caution against relying on breast density alone when determining screening strategies, and say age and other risk factors also need to be considered.
 

New approach proposed

Current recommendations from the United States Preventive Services Task Force suggest that women in their 40s can choose to undergo screening mammography based on their own personal preference, Dr. Shih explained in an interview.

However, these recommendations do not take into consideration the additional risk that breast density confers on breast cancer risk – and the only way women can know their breast density is to have a mammogram. “If you follow [current] guidelines, you would not know about your breast density until the age of 45 or 50,” she commented.

“But what if you knew about breast density earlier on and then acted on it – would that make a difference?” This was the question her team set out to explore.

For their study, the authors defined women with dense breasts as those with the Breast Imaging Reporting and Data System (BI-RADS) category C (heterogeneously dense breasts) and category D (extremely dense breasts).

The team used a computer model to compare seven different breast screening strategies:

  • No screening.
  • Triennial mammography from age 50 to 75 years (T50).
  • Biennial mammography from age 50 to 75 years (B50).
  • Stratified annual mammography from age 50 to 75 for women with dense breasts at age 50, and triennial screening from age 50 to 75 for women without dense breasts at age 50 (SA50T50).
  • Stratified annual mammography from age 50 to 75 for women with dense breasts at age 50, and biennial screening from age 50 to 75 for those without dense breasts at age 50 (SA50B50).
  • Stratified annual mammography from age 40 to 75 for women with dense breasts at age 49, and triennial screening from age 50 to 75 for those without dense breasts at age 40 (SA40T50).
  • Stratified annual mammography from age 40 to 75 for women with dense breasts at age 40, and biennial mammography for women from age 50 to 75 without dense breasts at age 40 (SA40B50).
 

 

Compared with a no-screening strategy, the average number of mammography sessions through a woman’s lifetime would increase from seven mammograms per lifetime for the least frequent screening (T50) to 22 mammograms per lifetime for the most intensive screening schedule, the team reports.  

Compared with no screening, screening would reduce breast cancer deaths by 8.6 per 1,000 women (T50)–13.2 per 1,000 women (SA40B50).

A cost-effectiveness analysis showed that the proposed new approach (SA40B50) yielded an incremental cost-effectiveness ratio of $36,200 per quality-adjusted life-year (QALY), compared with the currently recommended biennial screening strategy. This is well within the willingness-to-pay threshold of $100,000 per QALY that is generally accepted by society, the authors point out.

On the other hand, false-positive results and overdiagnosis would increase, the authors note.

The average number of false positives would increase from 141.2 per 1,000 women who underwent the least frequent triennial mammography screening schedule (T50) to 567.3 per 1,000 women with the new approach (SA40B50).  

Rates of overdiagnosis would also increase from a low of 12.5% to a high of 18.6%, they add.

“With this study, we are not saying that everybody should start screening at the age of 40. We’re just saying, do a baseline mammography at 40, know your breast density status, and then we can try to modify the screening schedule based on individual risk,” Dr. Shih emphasized.

“Compared with other screening strategies examined in our study, this strategy is associated with the greatest reduction in breast cancer mortality and is cost effective, [although it] involves the most screening mammograms in a woman’s lifetime and higher rates of false-positive results and overdiagnosis,” the authors conclude.  
 

Fundamental problem with this approach 

The fundamental problem with this approach of stratifying risk on measurement of breast density – and on the basis of a single reading – is that not every woman with dense breasts is at increased risk for breast cancer, the editorialists comment.

Dr. Kerlikowske and Dr. Bibbins-Domingo point out that, in fact, only about one-quarter of women with dense breasts are at high risk for a missed invasive cancer within 1 year of a negative mammogram, and these women can be identified by using the Breast Cancer Surveillance Consortium risk model.

“This observation means that most women with dense breasts can undergo biennial screening and need not consider annual screening or supplemental imaging,” the editorialists write.

“Thus, we caution against using breast density alone to determine if a woman is at elevated risk for breast cancer,” they emphasize.

An alternative option is to focus on overall risk to select screening strategies, they suggest. For example, most guidelines recommend screening from age 50 to 74, so identifying women in their 40s who have the same risk of a woman aged 50-59 is one way to determine who may benefit from earlier initiation of screening, the editorialists observe.

“Thus, women who have a first-degree relative with breast cancer or a history of breast biopsy could be offered screening in their 40s, and, if mammography shows dense breasts, they could continue biennial screening through their 40s,” the editorialists observe. “Such women with nondense breasts could resume biennial screening at age 50 years.”  

Dr. Shih told this news organization that she did not disagree with the editorialists’ suggestion that physicians could focus on overall breast cancer risk to select an appropriate screening strategy for individual patients.

“What we are suggesting is, ‘Let’s just do a baseline assessment at the age of 40 so women know their breast density instead of waiting until they are older,’ “ she said.

“But what the editorialists are suggesting is a strategy that could be even more cost effective,” she acknowledged. Dr. Shih also said that Dr. Kerlikowske and Dr. Bibbins-Domingo’s estimate that only one-quarter of women with dense breasts are actually at high risk for breast cancer likely reflects their limitation of breast density to only those women with BI-RADs category “D” – extremely dense breasts.

Yet as Dr. Shih notes, women with category C and category D breast densities are both at higher risk for breast cancer, so ignoring women with lesser degrees of breast density still doesn’t address the fact that they have a higher-than-average risk for breast cancer.

“It’s getting harder to make universal screening strategies work as we are learning more and more about breast cancer, so people are starting to talk about screening strategies based on a patient’s risk classification,” Dr. Shih noted.

“It’ll be harder to implement these kinds of strategies, but it seems like the right way to go,” she added.

The study was funded by the National Cancer Institute. Dr. Shih reports grants from the National Cancer Institute during the conduct of the study and personal fees from Pfizer and AstraZeneca outside the submitted work. Dr. Kerlikowske is an unpaid consultant for GRAIL for the STRIVE study. Dr. Bibbins-Domingo has disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

A new approach to breast screening proposes that all women should have a baseline evaluation of breast density by mammography at the age of 40.

The result would then be used to stratify further screening, with annual screening starting at age 40 for average-risk women who have dense breasts, and screening every 2 years starting at age 50 for women without dense breasts.

Such an approach would be cost effective and offers a more targeted risk-based strategy for the early detection of breast cancer when compared with current practices, say the authors, led by Tina Shih, PhD, University of Texas MD Anderson Cancer Center, Houston.

Their modeling study was published online in the Annals of Internal Medicine.

However, experts writing in an accompanying editorial are not persuaded. Karla Kerlikowske, MD, and Kirsten Bibbins-Domingo, MD, PhD, both from the University of California, San Francisco, point out that not all women with dense breasts are at increased risk for breast cancer. They caution against relying on breast density alone when determining screening strategies, and say age and other risk factors also need to be considered.
 

New approach proposed

Current recommendations from the United States Preventive Services Task Force suggest that women in their 40s can choose to undergo screening mammography based on their own personal preference, Dr. Shih explained in an interview.

However, these recommendations do not take into consideration the additional risk that breast density confers on breast cancer risk – and the only way women can know their breast density is to have a mammogram. “If you follow [current] guidelines, you would not know about your breast density until the age of 45 or 50,” she commented.

“But what if you knew about breast density earlier on and then acted on it – would that make a difference?” This was the question her team set out to explore.

For their study, the authors defined women with dense breasts as those with the Breast Imaging Reporting and Data System (BI-RADS) category C (heterogeneously dense breasts) and category D (extremely dense breasts).

The team used a computer model to compare seven different breast screening strategies:

  • No screening.
  • Triennial mammography from age 50 to 75 years (T50).
  • Biennial mammography from age 50 to 75 years (B50).
  • Stratified annual mammography from age 50 to 75 for women with dense breasts at age 50, and triennial screening from age 50 to 75 for women without dense breasts at age 50 (SA50T50).
  • Stratified annual mammography from age 50 to 75 for women with dense breasts at age 50, and biennial screening from age 50 to 75 for those without dense breasts at age 50 (SA50B50).
  • Stratified annual mammography from age 40 to 75 for women with dense breasts at age 40, and triennial screening from age 50 to 75 for those without dense breasts at age 40 (SA40T50).
  • Stratified annual mammography from age 40 to 75 for women with dense breasts at age 40, and biennial mammography from age 50 to 75 for women without dense breasts at age 40 (SA40B50).
 

 

The average number of mammograms over a woman’s lifetime would range from seven with the least frequent screening schedule (T50) to 22 with the most intensive schedule (SA40B50), the team reports.

Compared with no screening, the screening strategies would reduce breast cancer deaths by between 8.6 per 1,000 women (T50) and 13.2 per 1,000 women (SA40B50).

A cost-effectiveness analysis showed that the proposed new approach (SA40B50) yielded an incremental cost-effectiveness ratio of $36,200 per quality-adjusted life-year (QALY), compared with the currently recommended biennial screening strategy. This is well within the willingness-to-pay threshold of $100,000 per QALY that is generally accepted by society, the authors point out.
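The ICER arithmetic behind that figure is a simple division of incremental cost by incremental benefit. The sketch below is illustrative only: the study reports the resulting ratio, not these inputs, so the per-strategy cost and QALY totals here are hypothetical placeholders chosen to reproduce the published $36,200/QALY.

```python
# Illustrative sketch of the incremental cost-effectiveness ratio (ICER)
# arithmetic behind the reported $36,200/QALY figure. The per-strategy
# cost and QALY totals below are hypothetical placeholders (the study
# reports the resulting ratio, not these inputs), chosen so the division
# reproduces the published number.

WILLINGNESS_TO_PAY = 100_000  # $/QALY threshold cited in the article

def icer(cost_new, qaly_new, cost_ref, qaly_ref):
    """ICER = incremental cost / incremental effectiveness (QALYs gained)."""
    return (cost_new - cost_ref) / (qaly_new - qaly_ref)

# Hypothetical per-woman totals: SA40B50 vs. the biennial reference strategy.
ratio = icer(cost_new=5_430, qaly_new=20.15, cost_ref=3_620, qaly_ref=20.10)
print(round(ratio))  # 36200 -- well under the $100,000/QALY threshold
```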

On the other hand, false-positive results and overdiagnosis would increase, the authors note.

The average number of false-positive results would increase from 141.2 per 1,000 women with the least frequent, triennial screening schedule (T50) to 567.3 per 1,000 women with the new approach (SA40B50).

Rates of overdiagnosis would also increase from a low of 12.5% to a high of 18.6%, they add.
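Taken together, the per-1,000-women figures above allow a rough harm-per-benefit comparison between the least frequent schedule (T50) and the proposed approach (SA40B50). This is derived, back-of-the-envelope arithmetic, not a calculation reported in the study:

```python
# Back-of-the-envelope harm/benefit arithmetic using the per-1,000-women
# figures reported for T50 and SA40B50. Derived illustration only; the
# study itself does not report this ratio.

deaths_averted = {"T50": 8.6, "SA40B50": 13.2}     # per 1,000 women vs. no screening
false_positives = {"T50": 141.2, "SA40B50": 567.3} # per 1,000 women

# Extra benefit and extra harm when moving from T50 to SA40B50
extra_benefit = deaths_averted["SA40B50"] - deaths_averted["T50"]  # 4.6
extra_harm = false_positives["SA40B50"] - false_positives["T50"]   # 426.1

# Roughly how many additional false positives per additional death averted
print(round(extra_harm / extra_benefit))  # 93
```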

“With this study, we are not saying that everybody should start screening at the age of 40. We’re just saying, do a baseline mammography at 40, know your breast density status, and then we can try to modify the screening schedule based on individual risk,” Dr. Shih emphasized.

“Compared with other screening strategies examined in our study, this strategy is associated with the greatest reduction in breast cancer mortality and is cost effective, [although it] involves the most screening mammograms in a woman’s lifetime and higher rates of false-positive results and overdiagnosis,” the authors conclude.  
 

Fundamental problem with this approach 

The fundamental problem with this approach of stratifying risk on measurement of breast density – and on the basis of a single reading – is that not every woman with dense breasts is at increased risk for breast cancer, the editorialists comment.

Dr. Kerlikowske and Dr. Bibbins-Domingo point out that, in fact, only about one-quarter of women with dense breasts are at high risk for a missed invasive cancer within 1 year of a negative mammogram, and these women can be identified by using the Breast Cancer Surveillance Consortium risk model.

“This observation means that most women with dense breasts can undergo biennial screening and need not consider annual screening or supplemental imaging,” the editorialists write.

“Thus, we caution against using breast density alone to determine if a woman is at elevated risk for breast cancer,” they emphasize.

An alternative option is to focus on overall risk to select screening strategies, they suggest. For example, most guidelines recommend screening from age 50 to 74, so identifying women in their 40s who have the same risk as a woman aged 50-59 is one way to determine who may benefit from earlier initiation of screening, the editorialists observe.

“Thus, women who have a first-degree relative with breast cancer or a history of breast biopsy could be offered screening in their 40s, and, if mammography shows dense breasts, they could continue biennial screening through their 40s,” the editorialists observe. “Such women with nondense breasts could resume biennial screening at age 50 years.”  

Dr. Shih told this news organization that she did not disagree with the editorialists’ suggestion that physicians could focus on overall breast cancer risk to select an appropriate screening strategy for individual patients.

“What we are suggesting is, ‘Let’s just do a baseline assessment at the age of 40 so women know their breast density instead of waiting until they are older,’” she said.

“But what the editorialists are suggesting is a strategy that could be even more cost effective,” she acknowledged. Dr. Shih also said that Dr. Kerlikowske and Dr. Bibbins-Domingo’s estimate that only one-quarter of women with dense breasts are actually at high risk for breast cancer likely reflects their limiting breast density to BI-RADS category D – extremely dense breasts.

Yet as Dr. Shih notes, women with category C and category D breast densities are both at higher risk for breast cancer, so ignoring women with lesser degrees of breast density still doesn’t address the fact that they have a higher-than-average risk for breast cancer.

“It’s getting harder to make universal screening strategies work as we are learning more and more about breast cancer, so people are starting to talk about screening strategies based on a patient’s risk classification,” Dr. Shih noted.

“It’ll be harder to implement these kinds of strategies, but it seems like the right way to go,” she added.

The study was funded by the National Cancer Institute. Dr. Shih reports grants from the National Cancer Institute during the conduct of the study and personal fees from Pfizer and AstraZeneca outside the submitted work. Dr. Kerlikowske is an unpaid consultant for GRAIL for the STRIVE study. Dr. Bibbins-Domingo has disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

 



Findings could change breast cancer risk management

Article Type
Changed
Thu, 12/15/2022 - 17:31

New findings of breast cancer gene mutations in women who have no family history of the disease offer a new way of estimating risk and may change the way in which these women are advised on risk management.

The findings come from two large studies, both published on Jan. 20 in the New England Journal of Medicine.

The two articles are “extraordinary” for broadening and validating the genomic panel to help screen women at risk for breast cancer in the future, commented Eric Topol, MD, professor of molecular medicine, Scripps Research, La Jolla, Calif., and Medscape editor in chief.

“Traditionally, genetic testing of inherited breast cancer genes has focused on women at high risk who have a strong family history of breast cancer or those who were diagnosed at an early age, such as under 45 years,” commented the lead investigator of one of the studies, Fergus Couch, PhD, a pathologist at the Mayo Clinic, Rochester, Minn.

“[Although] the risk of developing breast cancer is generally lower for women without a family history of the disease ... when we looked at all women, we found that 30% of breast cancer mutations occurred in women who are not high risk,” he said.

In both studies, mutations or variants in eight genes – BRCA1, BRCA2, PALB2, BARD1, RAD51C, RAD51D, ATM, and CHEK2 – were found to be significantly associated with breast cancer risk.

However, the distribution of mutations among women with breast cancer differed from the distribution among unaffected women, noted Steven Narod, MD, from the Women’s College Research Institute, Toronto, in an accompanying editorial.

“What this means to clinicians, now that we are expanding the use of gene-panel testing to include unaffected women with a moderate risk of breast cancer in the family history, is that our time will increasingly be spent counseling women with CHEK2 and ATM mutations,” he wrote. Currently, these two are “clumped in with ‘other genes.’ ... Most of the pretest discussion is currently focused on the implications of finding a BRCA1 or BRCA2 mutation.”

The new findings may lead to new risk management strategies, he suggested. “Most breast cancers that occur in women with a mutation in ATM or CHEK2 are estrogen receptor positive, so these women may be candidates for antiestrogen therapies such as tamoxifen, raloxifene, or aromatase inhibitors,” he wrote.

Dr. Narod observed that, for now, the management of most women with either mutation will consist of screening alone, starting with MRI at age 40 years.

The medical community is not ready yet to expand genetic screening to the general population, cautions Walton Taylor, MD, past president of the American Society of Breast Surgeons.

The ASBrS currently recommends that all patients with breast cancer as well as those at high risk for breast cancer be offered genetic testing. “All women at risk should be tested, and all patients with pathogenic variants need to be managed appropriately – it saves lives,” Dr. Taylor emphasized.

However, “unaffected people with no family history do not need genetic testing at this time,” he said in an interview.

As to what physicians might do to better manage patients with mutations that predispose to breast cancer, Dr. Taylor said, “It’s surprisingly easy.”

Every genetic testing company provides genetic counselors to guide patients through next steps, Dr. Taylor pointed out, and most cancer patients have nurse navigators who make sure patients get tested and followed appropriately.

Members of the ASBrS follow the National Comprehensive Cancer Network guidelines when they identify carriers of a pathogenic variant. Dr. Taylor said these are very useful guidelines for virtually all mutations identified thus far.

“This research is not necessarily new, but it is confirmatory for what we are doing, and that helps us make sure we are going down the right pathway,” Dr. Taylor said. “It confirms that what we think is right is right – and that matters.”
 

 

 

CARRIERS consortium findings

The study led by Dr. Couch was carried out by the Cancer Risk Estimates Related to Susceptibility (CARRIERS) consortium. It involved analyzing data from 17 epidemiology studies that focused on women in the general population who develop breast cancer. For the studies, which were conducted in the United States, pathogenic variants in 28 cancer-predisposition genes were sequenced from 32,247 women with breast cancer (case patients) and 32,544 unaffected women (control persons).

In the overall CARRIERS analysis, the prevalence of pathogenic variants in 12 clinically actionable genes was 5.03% among case patients and 1.63% among control persons. The prevalence was similar among non-Hispanic White, non-Hispanic Black, and Hispanic case patients and control persons, they added. The prevalence among Asian American case patients was lower, at only 1.64%.
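Those two prevalences imply a crude overall odds ratio of roughly 3, which a few lines of arithmetic can check. This is an illustrative aggregate calculation, not one reported by CARRIERS, which estimates risk gene by gene:

```python
# Crude overall odds ratio implied by the reported carrier prevalences:
# 5.03% of case patients vs. 1.63% of controls carried a pathogenic
# variant in one of 12 actionable genes. Illustrative arithmetic only.

p_cases, p_controls = 0.0503, 0.0163

odds_cases = p_cases / (1 - p_cases)        # odds of carrying a variant, cases
odds_controls = p_controls / (1 - p_controls)  # odds, controls

odds_ratio = odds_cases / odds_controls
print(round(odds_ratio, 1))  # 3.2
```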

Among patients who had breast cancer, the most common pathogenic variants included BRCA2, which occurred in 1.29% of case patients, followed by CHEK2, at a prevalence of 1.08%, and BRCA1, at a prevalence of 0.85%.

Mutations in BRCA1 increased the risk for breast cancer more than 7.5-fold; mutations in BRCA2 increased that risk more than fivefold, the investigators stated.

Mutations in PALB2 increased the risk of breast cancer approximately fourfold, they added.

Prevalence rates for both BRCA1 and BRCA2 among breast cancer patients declined rapidly after the age of 40. The decline in other variants, including ATM, CHEK2, and PALB2, was limited with increasing age.

Indeed, mutations in all five of these genes were associated with a lifetime absolute risk for breast cancer greater than 20% by the age of 85 years among non-Hispanic Whites.

Pathogenic variants in BRCA1 or BRCA2 yielded a lifetime risk for breast cancer of approximately 50%. Mutations in PALB2 yielded a lifetime breast cancer risk of approximately 32%.

The risk of having a mutation in specific genes varied depending on the type of breast cancer. For example, mutations in BARD1, RAD51C, and RAD51D increased the risk for estrogen receptor (ER)–negative breast cancer as well as triple-negative breast cancer, the authors noted, whereas mutations in ATM, CDH1, and CHEK2 increased the risk for ER-positive breast cancer.

“These refined estimates of the prevalences of pathogenic variants among women with breast cancer in the overall population, as opposed to selected high-risk patients, may inform ongoing discussions regarding testing in patients with breast cancer,” the CARRIERS authors observed.

“The risks of breast cancer associated with pathogenic variants in the genes evaluated in the population-based CARRIERS analysis also provide important information for risk assessment and counseling of women with breast cancer who do not meet high-risk selection criteria,” they suggested.
 

Similar findings in second study

The second study was conducted by the Breast Cancer Association Consortium under lead author Leila Dorling, PhD, University of Cambridge (England). This group sequenced 34 susceptibility genes from 60,466 women with breast cancer and 53,461 unaffected control persons.

“Protein-truncating variants in five genes (ATM, BRCA1, BRCA2, CHEK2, and PALB2) were associated with a significant risk of breast cancer overall (P < .0001),” the BCAC members reported. “For these genes, odds ratios ranged from 2.10 to 10.57.”

The association between overall breast cancer risk and mutations in seven other genes was more modest, conferring approximately twice the risk for breast cancer overall, although that risk was threefold higher for the TP53 mutation.

For the 12 genes the consortium singled out as being associated with either a significant or a more modest risk for breast cancer, the effect size did not vary significantly between European and Asian women, the authors noted. Again, the risk for ER-positive breast cancer was over two times greater for those who had either the ATM or the CHEK2 mutation. Having mutations in BARD1, BRCA1, BRCA2, PALB2, RAD51C, and RAD51D conferred a higher risk for ER-negative disease than for ER-positive disease.

There was also an association between rare missense variants in six genes – CHEK2, ATM, TP53, BRCA1, CDH1, and RECQL – and overall breast cancer risk, with the clearest evidence being for CHEK2.

“The absolute risk estimates place protein-truncating variants in BRCA1, BRCA2, and PALB2 in the high-risk category and place protein-truncating variants in ATM, BARD1, CHEK2, RAD51C, and RAD51D in the moderate-risk category,” Dr. Dorling and colleagues reaffirmed.

“These results may guide screening as well as prevention with risk-reducing surgery or medication, in accordance with national guidelines,” the authors suggested.

The CARRIERS study was supported by the National Institutes of Health. The study by Dr. Dorling and colleagues was supported by the European Union Horizon 2020 research and innovation programs, among others. Dr. Narod disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

New findings of breast cancer gene mutations in women who have no family history of the disease offer a new way of estimating risk and may change the way in which these women are advised on risk management.

The findings come from two large studies, both published on Jan. 20 in the New England Journal of Medicine.

The two articles are “extraordinary” for broadening and validating the genomic panel to help screen women at risk for breast cancer in the future, commented Eric Topol, MD, professor of molecular medicine, Scripps Research, La Jolla, Calif., and Medscape editor in chief.

“Traditionally, genetic testing of inherited breast cancer genes has focused on women at high risk who have a strong family history of breast cancer or those who were diagnosed at an early age, such as under 45 years,” commented the lead investigator of one of the studies, Fergus Couch, PhD, a pathologist at the Mayo Clinic, Rochester, Minn.

“[Although] the risk of developing breast cancer is generally lower for women without a family history of the disease ... when we looked at all women, we found that 30% of breast cancer mutations occurred in women who are not high risk,” he said.

In both studies, mutations or variants in eight genes – BRCA1, BRCA2, PALB2, BARD1, RAD51C, RAD51D, ATM, and CHEK2 – were found to be significantly associated with breast cancer risk.

However, the distribution of mutations among women with breast cancer differed from the distribution among unaffected women, noted Steven Narod, MD, from the Women’s College Research Institute, Toronto, in an accompanying editorial.

“What this means to clinicians, now that we are expanding the use of gene-panel testing to include unaffected women with a moderate risk of breast cancer in the family history, is that our time will increasingly be spent counseling women with CHEK2 and ATM mutations,” he wrote. Currently, these two are “clumped in with ‘other genes.’ ... Most of the pretest discussion is currently focused on the implications of finding a BRCA1 or BRCA2 mutation.”

The new findings may lead to new risk management strategies, he suggested. “Most breast cancers that occur in women with a mutation in ATM or CHEK2 are estrogen receptor positive, so these women may be candidates for antiestrogen therapies such as tamoxifen, raloxifene, or aromatase inhibitors,” he wrote.

Dr. Narod observed that, for now, the management of most women with either mutation will consist of screening alone, starting with MRI at age 40 years.

The medical community is not yet ready to expand genetic screening to the general population, cautioned Walton Taylor, MD, past president of the American Society of Breast Surgeons.

The ASBrS currently recommends that all patients with breast cancer as well as those at high risk for breast cancer be offered genetic testing. “All women at risk should be tested, and all patients with pathogenic variants need to be managed appropriately – it saves lives,” Dr. Taylor emphasized.

However, “unaffected people with no family history do not need genetic testing at this time,” he said in an interview.

As to what physicians might do to better manage patients with mutations that predispose to breast cancer, Dr. Taylor said, “It’s surprisingly easy.”

Every genetic testing company provides genetic counselors to guide patients through next steps, Dr. Taylor pointed out, and most cancer patients have nurse navigators who make sure patients get tested and followed appropriately.

Members of the ASBrS follow the National Comprehensive Cancer Network guidelines when they identify carriers of a pathogenic variant. Dr. Taylor said these are very useful guidelines for virtually all mutations identified thus far.

“This research is not necessarily new, but it is confirmatory for what we are doing, and that helps us make sure we are going down the right pathway,” Dr. Taylor said. “It confirms that what we think is right is right – and that matters.”
 

 

 

CARRIERS consortium findings

The study led by Dr. Couch was carried out by the Cancer Risk Estimates Related to Susceptibility (CARRIERS) consortium. It involved analyzing data from 17 epidemiology studies that focused on women in the general population who develop breast cancer. For the studies, which were conducted in the United States, pathogenic variants in 28 cancer-predisposition genes were sequenced from 32,247 women with breast cancer (case patients) and 32,544 unaffected women (control persons).

In the overall CARRIERS analysis, the prevalence of pathogenic variants in 12 clinically actionable genes was 5.03% among case patients and 1.63% among control persons. The prevalence was similar among non-Hispanic White, non-Hispanic Black, and Hispanic women, in both case patients and control persons. The prevalence of pathogenic variants among Asian American case patients was lower, at 1.64%.

Among patients who had breast cancer, the most common pathogenic variants included BRCA2, which occurred in 1.29% of case patients, followed by CHEK2, at a prevalence of 1.08%, and BRCA1, at a prevalence of 0.85%.

Mutations in BRCA1 increased the risk for breast cancer more than 7.5-fold; mutations in BRCA2 increased that risk more than fivefold, the investigators stated.

Mutations in PALB2 increased the risk of breast cancer approximately fourfold, they added.

Prevalence rates for both BRCA1 and BRCA2 among breast cancer patients declined rapidly after the age of 40. The decline in other variants, including ATM, CHEK2, and PALB2, was limited with increasing age.

Indeed, mutations in all five of these genes were associated with a lifetime absolute risk for breast cancer greater than 20% by the age of 85 years among non-Hispanic Whites.

Pathogenic variants in BRCA1 or BRCA2 yielded a lifetime risk for breast cancer of approximately 50%. Mutations in PALB2 yielded a lifetime breast cancer risk of approximately 32%.

The risk of having a mutation in specific genes varied depending on the type of breast cancer. For example, mutations in BARD1, RAD51C, and RAD51D increased the risk for estrogen receptor (ER)–negative breast cancer as well as triple-negative breast cancer, the authors noted, whereas mutations in ATM, CDH1, and CHEK2 increased the risk for ER-positive breast cancer.

“These refined estimates of the prevalences of pathogenic variants among women with breast cancer in the overall population, as opposed to selected high-risk patients, may inform ongoing discussions regarding testing in patients with breast cancer,” the CARRIERS authors observed.

“The risks of breast cancer associated with pathogenic variants in the genes evaluated in the population-based CARRIERS analysis also provide important information for risk assessment and counseling of women with breast cancer who do not meet high-risk selection criteria,” they suggested.
 

Similar findings in second study

The second study was conducted by the Breast Cancer Association Consortium under lead author Leila Dorling, PhD, University of Cambridge (England). This group sequenced 34 susceptibility genes from 60,466 women with breast cancer and 53,461 unaffected control persons.

“Protein-truncating variants in five genes (ATM, BRCA1, BRCA2, CHEK2, and PALB2) were associated with a significant risk of breast cancer overall (P < .0001),” the BCAC members reported. “For these genes, odds ratios ranged from 2.10 to 10.57.”

The association between overall breast cancer risk and mutations in seven other genes was more modest, conferring approximately twice the risk for breast cancer overall, although that risk was threefold higher for the TP53 mutation.

For the 12 genes the consortium singled out as being associated with either a significant or a more modest risk for breast cancer, the effect size did not vary significantly between European and Asian women, the authors noted. Again, the risk for ER-positive breast cancer was over two times greater for those who had either the ATM or the CHEK2 mutation. Having mutations in BARD1, BRCA1, BRCA2, PALB2, RAD51C, and RAD51D conferred a higher risk for ER-negative disease than for ER-positive disease.

There was also an association between rare missense variants in six genes – CHEK2, ATM, TP53, BRCA1, CDH1, and RECQL – and overall breast cancer risk, with the clearest evidence being for CHEK2.

“The absolute risk estimates place protein-truncating variants in BRCA1, BRCA2, and PALB2 in the high-risk category and place protein-truncating variants in ATM, BARD1, CHEK2, RAD51C, and RAD51D in the moderate-risk category,” Dr. Dorling and colleagues wrote.

“These results may guide screening as well as prevention with risk-reducing surgery or medication, in accordance with national guidelines,” the authors suggested.

The CARRIERS study was supported by the National Institutes of Health. The study by Dr. Dorling and colleagues was supported by the European Union Horizon 2020 research and innovation programs, among others. Dr. Narod disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Could an osteoporosis drug reduce need for hip revision surgery?

Article Type
Changed
Wed, 01/20/2021 - 09:45

A single injection of denosumab (Prolia, Amgen), frequently used to treat osteoporosis, may reduce the need for revision surgery in patients with symptomatic osteolysis following total hip arthroplasty, a new proof-of-concept study suggests.

Aseptic loosening is the result of wear-induced osteolysis caused by the prosthetic hip and is a major contributor to the need for revision surgery in many parts of the world.

“The only established treatment for prosthesis-related osteolysis after joint replacement is revision surgery, which carries substantially greater morbidity and mortality than primary joint replacement,” Mohit M. Mahatma, MRes, of the University of Sheffield, England, and colleagues wrote in their article, published online Jan. 11 in The Lancet Rheumatology.

As well as an increased risk of infection and other complications, revision surgery is much more costly than a first-time operation, they added.

“The results of this proof-of-concept clinical trial indicate that denosumab is effective at reducing bone resorption activity within osteolytic lesion tissue and is well tolerated within the limitations of the single dose used here,” they concluded.

Commenting on the findings, Antonia Chen, MD, associate professor of orthopedic surgery, Harvard Medical School, Boston, emphasized that further studies are needed to assess the effectiveness of this strategy to reduce the need for hip revision surgery.

Nevertheless, “osteolysis is still unfortunately a problem we do have to deal with and we do not have any other way to prevent it,” she said in an interview. “So it’s a good start ... although further studies are definitely needed,” Dr. Chen added.

In an accompanying editorial, Hannu Aro, MD, Turku University Hospital in Finland, agreed: “Without a doubt, the trial is a breakthrough, but it represents only the first step in the development of pharmacological therapy aiming to slow, prevent, or even reverse the process of wear-induced periprosthetic osteolysis.”
 

Small single-center study

The phase 2, single-center, randomized, controlled trial involved 22 patients who had previously undergone hip replacement surgery at Sheffield Teaching Hospitals and were scheduled for revision surgery due to symptomatic osteolysis. They were randomized to a single subcutaneous injection of denosumab at a dose of 60 mg, or placebo, on their second hospital visit.

“The primary outcome was the between-group difference in the number of osteoclasts per mm of osteolytic membrane at the osteolytic membrane-bone interface at week 8,” the authors noted.

At this time point, the denosumab group had 83% fewer osteoclasts at the membrane-bone interface than the placebo group (median, 0.05 per mm vs. 0.30 per mm; P = .011).

Secondary histological outcomes were also significantly improved in favor of the denosumab group compared with placebo.
 

Potential to prevent half of all hip revision surgeries?

Patients who received denosumab also demonstrated an acute fall in serum and urinary markers of bone resorption following administration of the drug, reaching a nadir at week 4, which was maintained until revision surgery at week 8.

In contrast, “no change in these markers was observed in the placebo group [P < .0003 for all biomarkers],” the investigators noted. Rates of adverse events were comparable in both treatment groups.

As the authors explained, osteolysis occurs following joint replacement surgery when particles of plastic wear off from the prosthesis, triggering an immune reaction that attacks the bone around the implant, causing the joint to loosen.

“It is very clear from our bone biopsies and bone imaging that the [denosumab] injection stops the bone absorbing the microplastic particles from the replacement joint and therefore could prevent the bone from being eaten away and the need for revision surgery,” senior author Mark Wilkinson, MBChB, PhD, honorary consultant orthopedic surgeon, Sheffield Teaching Hospitals, said in a press release from his institution.

“This study is a significant breakthrough as we’ve demonstrated that there is a drug, already available and successful in the treatment of osteoporosis, that has the potential to prevent up to half of all revised replacement surgeries which are caused by osteolysis,” he added.

Dr. Wilkinson and coauthors said their results justify the need for future trials targeting earlier-stage disease to further test the use of denosumab to prevent or reduce the need for revision surgery.

In 2018, aseptic loosening accounted for over half of all revision procedures, as reported to the National Joint Registry in England and Wales.
 

 

 

Older polyethylene prostheses are the main culprit

Commenting further on the study, Dr. Chen noted that osteolysis still plagues orthopedic surgeons because the original polyethylene prostheses were not very good. A better prosthesis, developed at Massachusetts General Hospital, is made of highly cross-linked polyethylene; it still wears over time, but to a much lesser extent than the older polyethylene prostheses.

Metal and ceramic prostheses also can induce osteolysis, but again to a much lesser extent than the older polyethylene implants.

“Any particle can technically cause osteolysis but plastic produces the most particles,” Dr. Chen explained. Although hip revision rates in the United States are low to begin with, aseptic loosening is still one of the main reasons that patients need to undergo revision surgery, she observed.

“A lot of patients are still living with the old plastic [implants] so there is still a need for something like this,” she stressed.

However, many questions about this potential new strategy remain to be answered, including when best to initiate treatment and how to manage patients at risk for osteolysis 20-30 years after they have received their original implant.

In his editorial, Dr. Aro said that serious adverse consequences often become evident 10-20 years after patients have undergone the original hip replacement procedures, when they are potentially less physically fit than they were at the time of the operation and thus less able to withstand the rigors of a difficult revision surgery.

“In this context, the concept of nonsurgical pharmacological treatment of periprosthetic osteolysis ... brings a new hope for the ever-increasing population of patients with total hip arthroplasty to avoid revision surgery,” Dr. Aro suggested.

However, Dr. Aro cautioned that reduction of bone turnover by antiresorptive agents such as denosumab has been associated with the development of atypical femoral fractures.

The study was funded by Amgen. Dr. Wilkinson has reported receiving a grant from Amgen. Dr. Chen has reported serving as a consultant for Stryker and b-One Ortho. Dr. Aro has reported receiving a grant to his institution from Amgen Finland and the Academy of Finland. He has also served as a member of an advisory scientific board for Amgen Finland.

A version of this article first appeared on Medscape.com.


Commenting on the findings, Antonia Chen, MD, associate professor of orthopedic surgery, Harvard Medical School, Boston, emphasized that further studies are needed to assess the effectiveness of this strategy to reduce the need for hip revision surgery.

Nevertheless, “osteolysis is still unfortunately a problem we do have to deal with and we do not have any other way to prevent it,” she said in an interview. “So it’s a good start ... although further studies are definitely needed,” Dr. Chen added.

In an accompanying editorial, Hannu Aro, MD, Turku University Hospital in Finland, agreed: “Without a doubt, the trial is a breakthrough, but it represents only the first step in the development of pharmacological therapy aiming to slow, prevent, or even reverse the process of wear-induced periprosthetic osteolysis.”
 

Small single-center study

The phase 2, single-center, randomized, controlled trial involved 22 patients who had previously undergone hip replacement surgery at Sheffield Teaching Hospitals and were scheduled for revision surgery due to symptomatic osteolysis. They were randomized to a single subcutaneous injection of denosumab at a dose of 60 mg, or placebo, on their second hospital visit.

“The primary outcome was the between-group difference in the number of osteoclasts per mm of osteolytic membrane at the osteolytic membrane-bone interface at week 8,” the authors noted.

At this time point, the denosumab group had 83% fewer osteoclasts at the osteolytic membrane–bone interface than the placebo group (median, 0.05 per mm vs 0.30 per mm; P = .011).
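For readers who want the arithmetic spelled out, the 83% figure follows directly from the two medians reported above; a quick back-of-envelope sketch (not part of the study's analysis):

```python
# Relative reduction in osteoclast density implied by the reported medians.
denosumab_median = 0.05  # osteoclasts per mm, denosumab group (week 8)
placebo_median = 0.30    # osteoclasts per mm, placebo group (week 8)

relative_reduction = 1 - denosumab_median / placebo_median
print(f"{relative_reduction:.0%}")  # prints "83%"
```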

Secondary histological outcomes were also significantly improved in favor of the denosumab group compared with placebo.
 

Potential to prevent half of all hip revision surgeries?

Patients who received denosumab also demonstrated an acute fall in serum and urinary markers of bone resorption following administration of the drug, reaching a nadir at week 4, which was maintained until revision surgery at week 8.

In contrast, “no change in these markers was observed in the placebo group [P < .0003 for all biomarkers],” the investigators noted. Rates of adverse events were comparable in both treatment groups.

As the authors explained, osteolysis occurs following joint replacement surgery when particles of plastic wear off from the prosthesis, triggering an immune reaction that attacks the bone around the implant, causing the joint to loosen.

“It is very clear from our bone biopsies and bone imaging that the [denosumab] injection stops the bone absorbing the microplastic particles from the replacement joint and therefore could prevent the bone from being eaten away and the need for revision surgery,” senior author Mark Wilkinson, MBChB, PhD, honorary consultant orthopedic surgeon, Sheffield Teaching Hospitals, said in a press release from his institution.

“This study is a significant breakthrough as we’ve demonstrated that there is a drug, already available and successful in the treatment of osteoporosis, that has the potential to prevent up to half of all revised replacement surgeries which are caused by osteolysis,” he added.

Dr. Wilkinson and coauthors said their results justify the need for future trials targeting earlier-stage disease to further test the use of denosumab to prevent or reduce the need for revision surgery.

In 2018, aseptic loosening accounted for over half of all revision procedures, as reported to the National Joint Registry in England and Wales.

Older polyethylene prostheses are the main culprit

Commenting further on the study, Dr. Chen noted that osteolysis still plagues orthopedic surgeons because the original polyethylene prostheses were not very good. A better prosthesis developed at Massachusetts General Hospital is made of highly cross-linked polyethylene; it still wears over time, but to a much lesser extent than the older polyethylene prostheses.

Metal and ceramic prostheses also can induce osteolysis, but again to a much lesser extent than the older polyethylene implants.

“Any particle can technically cause osteolysis but plastic produces the most particles,” Dr. Chen explained. Although hip revision rates in the United States are low to begin with, aseptic loosening is still one of the main reasons that patients need to undergo revision surgery, she observed.

“A lot of patients are still living with the old plastic [implants] so there is still a need for something like this,” she stressed.

However, many questions about this potential new strategy remain to be answered, including when best to initiate treatment and how to manage patients at risk for osteolysis 20-30 years after they have received their original implant.

In his editorial, Dr. Aro said that serious adverse consequences often become evident 10-20 years after patients have undergone the original hip replacement procedures, when they are potentially less physically fit than they were at the time of the operation and thus less able to withstand the rigors of a difficult revision surgery.

“In this context, the concept of nonsurgical pharmacological treatment of periprosthetic osteolysis ... brings a new hope for the ever-increasing population of patients with total hip arthroplasty to avoid revision surgery,” Dr. Aro suggested.

However, Dr. Aro cautioned that reduction of bone turnover by antiresorptive agents such as denosumab has been associated with the development of atypical femoral fractures.

The study was funded by Amgen. Dr. Wilkinson has reported receiving a grant from Amgen. Dr. Chen has reported serving as a consultant for Stryker and b-One Ortho. Dr. Aro has reported receiving a grant to his institution from Amgen Finland and the Academy of Finland. He has also served as a member of an advisory scientific board for Amgen Finland.

A version of this article first appeared on Medscape.com.


U.S. cancer death rates drop for second year in a row

Article Type
Changed
Thu, 12/15/2022 - 17:31

For the second year in a row, mortality from cancer has fallen in the United States, driven largely by reductions in the incidence of, and death from, non–small cell lung cancer (NSCLC) in men and women, according to a new report from the American Cancer Society.

The study was published online Jan. 12 in CA: A Cancer Journal for Clinicians.

“Mortality rates are a better indicator of progress against cancer than incidence or survival because they are less affected by biases resulting from changes in detection practices,” wrote the authors, led by Rebecca Siegel, MPH, American Cancer Society, Atlanta.  

“The overall drop of 31% as of 2018 [since the early 1990s] translates to an estimated 3,188,500 fewer cancer deaths (2,170,700 in men and 1,017,800 in women) than what would have occurred if mortality rates had remained at their peak,” the researchers added.
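The "fewer deaths" figure is a counterfactual: deaths expected had the peak mortality rate persisted, minus deaths actually observed. A minimal sketch of that calculation, using entirely hypothetical rates and populations rather than the ACS figures (the actual methodology also applies rates by age group):

```python
# Deaths averted = sum over years of (peak rate - observed rate) x population.
# All numbers below are hypothetical placeholders, not ACS data.
PEAK_RATE = 215.1  # cancer deaths per 100,000 at the early-1990s peak

observed = {  # year: (mortality rate per 100,000, population)
    2016: (155.8, 323_000_000),
    2017: (152.4, 325_000_000),
    2018: (149.2, 327_000_000),
}

averted = sum((PEAK_RATE - rate) / 100_000 * pop
              for rate, pop in observed.values())
print(f"Estimated deaths averted, {min(observed)}-{max(observed)}: {averted:,.0f}")
```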

Lung cancer accounted for 46% of the total decline in cancer mortality in the past 5 years, with a record single-year drop of 2.4% between 2017 and 2018.

The recent and rapid reductions in lung cancer mortality reflect better treatments for NSCLC, the authors suggested. For example, survival rates at 2 years have increased from 34% for patients diagnosed with NSCLC between 2009 and 2010 to 42% for those diagnosed during 2015 and 2016 – an absolute gain of 5%-6% in survival odds for every stage of diagnosis.

On a more somber note, the authors warned that COVID-19 is predicted to have a negative impact on both the diagnosis and outcomes of patients with cancer in the near future.  

“We anticipate that disruptions in access to cancer care in 2020 will lead to downstream increases in advanced stage diagnoses that may impede progress in reducing cancer mortality rates in the years to come,” Ms. Siegel said in a statement.
 

New cancer cases

The report provides an estimated number of new cancer cases and deaths in 2021 in the United States (nationally and state-by-state) based on the most current population-based data for cancer incidence through 2017 and for mortality through 2018. “An estimated 608,570 Americans will die from cancer in 2021, corresponding to more than 1600 deaths per day,” Ms. Siegel and colleagues reported.

The greatest number of deaths are predicted to be from the most common cancers: Lung, prostate, and colorectal cancer in men and lung, breast, and colorectal cancer in women, they added. However, the mortality rates for all four cancers are continuing to fall.

As of 2018, the death rate from lung cancer had dropped by 54% among males and by 30% among females over the past few decades, the investigators noted.

Mortality from female breast cancer has dropped by 41% since 1989; by 52% for prostate cancer since 1993; and by 53% and 59% for colorectal cancer for men (since 1980) and women (since 1969), respectively.

“However, in recent years, mortality declines have slowed for breast cancer and [colorectal cancer] and have halted for prostate cancer,” the researchers noted.

In contrast, the pace of the annual decline in lung cancer mortality doubled among men from 3.1% between 2009 and 2013 to 5.5% between 2014 and 2018, and from 1.8% to 4.4% among women during the same time intervals.

Increase in incidence at common sites

Despite the steady progress in mortality for most cancers, “rates continue to increase for some common sites,” Ms. Siegel and colleagues reported.

For example, death rates from uterine corpus cancer have accelerated since the late 1990s at twice the pace of the increase in incidence. Death rates also have increased for cancers of the oral cavity and pharynx – although for these cancers, the rise in mortality parallels the rise in incidence.

“Pancreatic cancer death rates [in turn] continued to increase slowly in men ... but remained stable in women, despite incidence [rates] rising by about 1% per year in both sexes,” the authors observed.

Meanwhile, although the incidence of cervical cancer has been declining for decades overall, it is increasing for distant-stage disease and for cervical adenocarcinoma, both of which often go undetected by cytology.

“These findings underscore the need for more targeted efforts to increase both HPV [human papillomavirus] vaccination among all individuals aged [26 and younger] and primary HPV testing or HPV/cytology co-testing every 5 years among women beginning at age 25,” the authors emphasized.

On a more positive note, the long-term increase in mortality from liver cancer has recently slowed among women and has stabilized among men, they added.

Once again, disparities in both cancer occurrence and outcomes varied considerably between racial and ethnic groups. For example, cancer is the leading cause of death in people who are Hispanic, Asian American, and Alaska Native. Survival rates at 5 years for almost all cancers are still higher for White patients than for Black patients, although the disparity in cancer mortality between Black persons and White persons has declined to 13% from a peak of 33% in 1993.

Geographic disparities in cancer mortality rates still prevail; they are largest for preventable cancers such as lung and cervical cancer, for which mortality varies by as much as fivefold across states.

And although cancer remains the second most common cause of death among children, death rates from cancer have continuously declined over time among both children and adolescents, largely the result of dramatic declines in death rates from leukemia in both age groups.

The study authors have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.


Intense rectal cancer surveillance may be reduced

Article Type
Changed
Wed, 12/23/2020 - 11:40

 

The intensity of posttreatment surveillance of patients with rectal cancer managed by a watch-and-wait approach can be safely reduced if patients achieve and maintain a clinical complete response within the first 3 years of initiation of that approach, a retrospective, multicenter registry study suggests.

“The risk of local regrowth or distant metastases after a clinical complete response to neoadjuvant chemoradiotherapy after nonoperative management of rectal cancer remains an important drawback for the widespread uptake of watch and wait in clinical practice,” Laura Fernandez, MD, Champalimaud Clinical Center, Lisbon, and colleagues observe.

“Conditional survival analysis estimates suggest that patients who sustain a clinical complete response for 3 years have 5% or lower risk of developing a local regrowth and a less than 2% risk of developing systemic recurrence thereafter,” the investigators emphasize.

Achieving a clinical complete response and sustaining it for 1 year is the “most relevant protective factor” for patients with rectal cancer and places them in an “excellent prognostic stage,” Dr. Fernandez said in a press statement.

The study was published online Dec. 11 in The Lancet Oncology.
 

A watch-and-wait database

A total of 793 patients were identified from the International Watch and Wait Database, a large registry of patients who experience a clinical complete response after neoadjuvant chemoradiotherapy and who are managed by a watch-and-wait strategy. The registry includes data from 47 clinics in 15 countries.

The main outcome measures were the probability of patients remaining free of local regrowth and distant metastasis for an additional 2 years after sustaining a clinical complete response for 1, 3, and 5 years after the start of watch-and-wait management.

Among patients who had sustained clinical complete response for 1 year, the probability of remaining local regrowth–free for an additional 2 years – in other words, for a total of 3 years – was 88.1%.

Local regrowth–free survival rates were in the high 90% range among patients who had sustained a clinical complete response for 3 years and for 5 years.

“Similar results were observed for distant metastasis–free survival,” Dr. Fernandez and colleagues continue. For example, the 2-year conditional distant metastasis–free survival rate among patients who remained free of distant metastasis for 1 year from the time the decision was made to initiate watch-and-wait management was 93.8%; for 3 years, it was 97.8%; and for 5 years, it was 96.6%, the investigators report.
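Conditional survival figures of this kind are typically computed as a ratio of Kaplan–Meier survival estimates: the probability of remaining event-free to time t + 2, divided by the probability of being event-free at time t. A minimal sketch, using a hypothetical survival curve chosen for illustration (not the registry’s actual estimates):

```python
# Conditional survival: P(event-free at t + window | event-free at t)
# = S(t + window) / S(t), with S taken from a Kaplan-Meier curve.
def conditional_survival(surv, t, window=2):
    return surv[t + window] / surv[t]

# Hypothetical distant metastasis-free survival curve: years -> S(t).
surv = {1: 0.950, 3: 0.891, 5: 0.872, 7: 0.860}

p = conditional_survival(surv, t=1)  # S(3) / S(1)
print(f"2-year conditional survival given 1 event-free year: {p:.1%}")
```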

The only risk factors identified in the study for local regrowth over time were baseline clinical tumor stage and the total dose of radiotherapy received.

However, after patients have achieved and sustained a complete clinical response for 1 year, known risk factors for local regrowth, such as disease stage before any treatment and the dose of radiation received by the patient, “seem to become irrelevant,” said Dr. Fernandez.

The authors say that after a patient sustains a clinical complete response for more than 3 years, it is unlikely that intensive surveillance for the detection of local regrowth would be required.

Indeed, they suggest that those who have no sign of regrowth or distant metastases at 3 years post treatment could probably be followed in established follow-up programs for rectal cancer patients who are treated with standard therapy, including radical resection.

Study limitations

Asked for comment, Joshua Smith, MD, PhD, a colorectal surgeon at Memorial Sloan Kettering Cancer Center, New York, cautioned that there are real limitations to retrospective data as used for the current analysis, including the heterogeneity of the definitions of a clinical complete response. The investigators also had to assess response to treatment both before and after 2010. Before 2010, intrarectal ultrasound was used to stage rectal cancer; currently, MRI is used.

There was also heterogeneity in the radiation used across the study interval. All of these factors must be taken into consideration when interpreting the investigators’ conclusions, Dr. Smith cautioned. Nevertheless, he also noted that the group is very sophisticated and that the article was well written and, in his view, not terribly overstated. “I just would be cautious with what they are saying that after 3 years, you do not need to be as strict with your surveillance,” Dr. Smith told this news organization.

“I think we still have some patients with local regrowth after that period of time, so I wouldn’t say we’re out of the woods after 3 years – I think we still have to follow these patients very closely,” he emphasized.

“The data clearly show that the longer a patient doesn’t have a local regrowth, the lower their chances are that they will develop local regrowth,” Dr. Smith said.

The study also provides clinicians with data to discuss with potential watch-and-wait candidates, he added. “The decision we make should really depend on the patient – what their goals are and what their quality-of-life perspective is,” Dr. Smith said. More definitive data on patient outcomes are expected soon from the Organ Preservation in Rectal Adenocarcinoma (OPRA) Trial.

That trial prospectively evaluates the watch-and-wait approach. Results should reflect not only what surgeons can anticipate with respect to local regrowth and distant metastases, but should also determine the real organ preservation rate – an important endpoint of the watch-and-wait approach.

“I think it will be a paradigm-changing trial,” Dr. Smith predicted.

The study was funded by the European Registration of Cancer Care, among other organizations. Dr. Fernandez has disclosed no relevant financial relationships. Dr. Smith has served as a clinical advisor to Guardant Health.

A version of this article first appeared on Medscape.com.

Publications
Topics
Sections

 

The intensity of posttreatment surveillance of patients with rectal cancer managed by a watch-and-wait approach can be safely reduced if patients achieve and maintain a clinical complete response within the first 3 years of initiation of that approach, a retrospective, multicenter registry study suggests.

“The risk of local regrowth or distant metastases after a clinical complete response to neoadjuvant chemoradiotherapy after nonoperative management of rectal cancer remains an important drawback for the widespread uptake of watch and wait in clinical practice,” Laura Fernandez, MD, Champalimaud Clinical Center, Lisbon, and colleagues observe.

“Conditional survival analysis estimates suggest that patients who sustain a clinical complete response for 3 years have 5% or lower risk of developing a local regrowth and a less than 2% risk of developing systemic recurrence thereafter,” the investigators emphasize.

Achieving a clinical complete response and sustaining it for 1 year is the “most relevant protective factor” for patients with rectal cancer and places them in an “excellent prognostic stage,” Dr. Fernandez said in a press statement.

The study was published online Dec. 11 in The Lancet Oncology.
 

A watch-and-wait database

A total of 793 patients were identified from the International Watch and Wait Database, a large registry of patients who achieved a clinical complete response after neoadjuvant chemoradiotherapy and were managed with a watch-and-wait strategy. The registry includes data from 47 clinics in 15 countries.

The main outcome measures were the probabilities of remaining free of local regrowth and distant metastasis for an additional 2 years among patients who had sustained a clinical complete response for 1, 3, or 5 years after the start of watch-and-wait management.

Among patients who had sustained a clinical complete response for 1 year, the probability of remaining local regrowth–free for an additional 2 years – in other words, for a total of 3 years – was 88.1%.

Local regrowth–free survival rates were in the high 90% range for patients who had sustained a clinical complete response for 3 years and for 5 years.

“Similar results were observed for distant metastasis–free survival,” Dr. Fernandez and colleagues continue. For example, the 2-year conditional distant metastasis–free survival rate among patients who remained free of distant metastasis for 1 year from the decision to initiate watch-and-wait management was 93.8%; for 3 years, it was 97.8%; and for 5 years, it was 96.6%, the investigators report.
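Conditional survival estimates of this kind follow from the standard identity S(t1 + t2 | t1) = S(t1 + t2) / S(t1): the probability of remaining event-free for an additional period, given event-free survival so far. A minimal sketch of that arithmetic, using made-up unconditional survival values rather than the study's actual curves:

```python
def conditional_survival(s_later: float, s_earlier: float) -> float:
    """P(event-free at t1 + t2 | event-free at t1) = S(t1 + t2) / S(t1).

    s_later:   unconditional survival probability at t1 + t2
    s_earlier: unconditional survival probability at t1
    """
    if not 0 < s_earlier <= 1 or not 0 <= s_later <= s_earlier:
        raise ValueError("need 0 <= S(t1+t2) <= S(t1) <= 1 with S(t1) > 0")
    return s_later / s_earlier

# Hypothetical example: if 85% of patients are regrowth-free at 1 year and
# 75% at 3 years (illustrative figures only), the 2-year conditional
# survival for a patient already regrowth-free at 1 year is 0.75 / 0.85.
print(round(conditional_survival(0.75, 0.85), 3))
```

Note how conditioning on having already survived the first interval always yields a probability at least as high as the unconditional one, which is why the reported conditional estimates improve the longer a response is sustained.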

The only risk factors identified in the study for local regrowth over time were baseline clinical tumor stage and the total dose of radiotherapy received.

However, after patients have achieved and sustained a complete clinical response for 1 year, known risk factors for local regrowth, such as disease stage before any treatment and the dose of radiation received by the patient, “seem to become irrelevant,” said Dr. Fernandez.

The authors say that after a patient sustains a clinical complete response for more than 3 years, it is unlikely that intensive surveillance for the detection of local regrowth would be required.

Indeed, they suggest that those who have no sign of regrowth or distant metastases at 3 years post treatment could probably be followed in established follow-up programs for rectal cancer patients who are treated with standard therapy, including radical resection.

Study limitations

Asked for comment, Joshua Smith, MD, PhD, a colorectal surgeon at Memorial Sloan Kettering Cancer Center, New York, cautioned that there are real limitations to the retrospective data used for the current analysis, including heterogeneity in the definitions of a clinical complete response. The investigators also tried to assess response to treatment both before and after 2010; before 2010, intrarectal ultrasound was used to stage rectal cancer, whereas currently MRI is used.

There was also heterogeneity of the radiation used across the study interval. All of these factors must be taken into consideration when interpreting the investigators’ conclusions, Smith cautioned. Nevertheless, he also noted that the group is very sophisticated and that the article was well written and, in his view, not terribly overstated. “I just would be cautious with what they are saying that after 3 years, you do not need to be as strict with your surveillance,” Dr. Smith told this news organization.

“I think we still have some patients with local regrowth after that period of time, so I wouldn’t say we’re out of the woods after 3 years – I think we still have to follow these patients very closely,” he emphasized.

“The data clearly show that the longer a patient doesn’t have a local regrowth, the lower their chances are that they will develop local regrowth,” Dr. Smith said.

The study also provides clinicians with data to discuss with potential watch-and-wait candidates, he added. “The decision we make should really depend on the patient – what their goals are and what their quality-of-life perspective is,” Dr. Smith said. More definitive data on patient outcomes are expected soon from the Organ Preservation in Rectal Adenocarcinoma (OPRA) Trial.

That trial prospectively evaluates the watch-and-wait approach. Results should reflect not only what surgeons can anticipate with respect to local regrowth and distant metastases, but it should also determine the real organ preservation rate – an important endpoint of the watch-and-wait approach.

“I think it will be a paradigm-changing trial,” Dr. Smith predicted.

The study was funded by the European Registration of Cancer Care, among other organizations. Dr. Fernandez has disclosed no relevant financial relationships. Dr. Smith has served as a clinical advisor to Guardant Health.

A version of this article first appeared on Medscape.com.

 


New standard emerges for locally advanced rectal cancer

Article Type
Changed
Mon, 01/04/2021 - 16:38

A new approach to the treatment of patients with high-risk, locally advanced rectal cancer reduces the rate of treatment failure and may also increase the rate of organ preservation, compared with the traditional approach.

Patients treated with short-course radiotherapy followed by chemotherapy before surgery showed reduced disease-related treatment failure at 3 years compared with the traditional approach of neoadjuvant chemoradiation followed by surgery plus or minus adjuvant chemotherapy.

This finding comes from the phase 3 RAPIDO trial.

This experimental treatment also doubled the rate of pathological complete response compared with standard of care, an added bonus, the researchers comment, as it may increase the opportunity for patients to pursue a nonsurgical organ-preservation option.

“Preoperative short-course radiotherapy followed by chemotherapy and total mesorectal excision could be considered as a new standard of care,” the researchers conclude. The team was led by Renu Bahadoer, MD, University Medical Center, Leiden, the Netherlands.

A “prominent benefit” of the experimental treatment — especially in this era of COVID-19 — is the reduction in the number of treatment days spent in healthcare facilities (12 days compared with 25-28 days with the traditional approach for the preoperative period alone), the researchers note.

“If adjuvant chemotherapy is given…the reduction is even more pronounced,” they add, “and this reduction in time spent in hospital minimizes the risk for these susceptible patients and improves hospitals’ ability to implement physical distancing during the COVID-19 pandemic situation.” 

The study was published online December 7 in Lancet Oncology.

The new approach looks “promising” and is likely to become the new standard of care — especially in the current climate of COVID-19 when fewer visits to healthcare facilities are highly desirable, agree editorialists Avanish Saklani, MBBS, and colleagues from the Tata Memorial Centre, Homi Bhabha National Institute, Mumbai, India, writing in an accompanying commentary.

They also agree that the protocol is likely to increase the number of patients being offered a “watch-and-wait” strategy because of its ability to induce a significantly higher rate of pathological complete response.

However, the editorialists add a note of caution, “Whether or not this new treatment paradigm will have similar outcomes in a younger population with aggressive disease biology…is unknown.” 
 

Details of the RAPIDO trial

The RAPIDO trial enrolled 912 eligible patients and was conducted across 54 hospitals and radiotherapy centers in 7 different countries. The median age of the cohort was 62 years, but 40% of the cohort were 65 years or older.

Eligible patients had a biopsy-proven, newly diagnosed, primary, locally advanced rectal adenocarcinoma, which was classified as high risk on pelvic MRI (with at least one of the following criteria: clinical tumor [cT] stage cT4a or cT4b, extramural vascular invasion, clinical nodal [cN] stage cN2, involved mesorectal fascia, or enlarged lateral lymph nodes considered metastatic).

They were randomly assigned 1:1 to receive either the experimental or standard treatment.

Patients in the experimental group received a short course of radiotherapy, delivered in five fractions of 5 Gy each, given over a maximum of 8 days.

This was followed by chemotherapy, preferably started within 11 to 18 days after the last radiotherapy session. It consisted of six cycles of CAPOX (capecitabine, oxaliplatin) or nine cycles of FOLFOX4 (oxaliplatin, leucovorin, fluorouracil), and the choice was per physician discretion or hospital policy.

Surgery (total mesorectal excision) was then carried out 2 to 4 weeks later.

In the standard-of-care group, patients received radiotherapy and concomitant chemotherapy (with oral capecitabine). Radiotherapy was administered in 28 daily fractions of 1.8 Gy up to 50.4 Gy, or 25 fractions of 2.0 Gy up to 50.0 Gy, with the choice between the two made by the physician or according to hospital policy.

This was followed by total mesorectal excision and, if stipulated by hospital policy, adjuvant chemotherapy with eight cycles of CAPOX or 12 cycles of FOLFOX4.
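The fractionation schedules in the two arms can be sanity-checked with the usual arithmetic (total dose = number of fractions × dose per fraction). A small illustrative sketch, not part of the trial protocol:

```python
def total_dose_gy(fractions: int, dose_per_fraction_gy: float) -> float:
    """Total radiotherapy dose delivered over a fractionated course."""
    return fractions * dose_per_fraction_gy

# Experimental arm: short-course radiotherapy, 5 fractions of 5 Gy each
print(total_dose_gy(5, 5.0))             # 25 Gy in total
# Standard-of-care arm, option 1: 28 daily fractions of 1.8 Gy
print(round(total_dose_gy(28, 1.8), 1))  # 50.4 Gy (rounded; floats)
# Standard-of-care arm, option 2: 25 fractions of 2.0 Gy
print(total_dose_gy(25, 2.0))            # 50.0 Gy
```

The short course thus delivers half the physical dose of the long course in a fifth as many visits, which is the source of the reduction in treatment days the researchers highlight; biologically effective dose per fraction differs, which is why the schedules are not directly comparable gray-for-gray.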

“The primary endpoint was disease-related treatment failure, defined as the first occurrence of locoregional failure, distant metastasis, a new primary colorectal tumor, or treatment-related death,” Bahadoer and colleagues observe.

At 3 years, rates of disease-related treatment failure were significantly lower in the experimental group, at 23.7% vs 30.4% in the standard-of-care group (P = .019). So too was the probability of distant metastases, at 20% vs 26.8% for the standard-of-care group (P = .0048).

In addition, the rates of pathological complete response were twice as high at 28% in the experimental group compared to 14% in the standard-of-care group (P < .0001).

The editorialists also suggest that this increase in pathological complete response seen in the experimental arm is probably the result of additional chemotherapy after the delivery of initial radiotherapy.

In contrast, the cumulative probability of locoregional failure at 3 years was higher in the experimental group, at 8.3% compared with 6% for the standard-of-care group, although this difference was not statistically significant (P = .12).

In the editorial, Saklani and colleagues comment that the higher rate of locoregional failure in the experimental group might indicate that a proportion of patients in that arm were nonresponders or poor responders to radiotherapy, or it could be related to the considerable delay in surgery necessitated by the presurgical course of chemotherapy, which lasted some 18 weeks.

The editorialists suggest that “an interim restage MRI scan after three cycles of chemotherapy can potentially identify this group of patients who are non-responders to preoperative treatment, thus potentially prompting an earlier surgery than planned, and thus possibly improving overall survival outcomes.”

Grade 3 or higher adverse events (AEs) during preoperative treatment occurred in 48% of patients in the experimental arm compared with 25% of patients in the standard-of-care group. In the subgroup of patients in the standard-of-care arm who received adjuvant chemotherapy, slightly over one-third developed grade 3 or higher AEs.

Serious AEs occurred in roughly equal numbers of patients in both groups (38% vs 34% in the standard-of-care arm). There were four treatment-related deaths in each of the two arms.

Bahadoer has disclosed no relevant financial relationships, but many coauthors have relationships with various pharmaceutical companies, as listed in the original article. The editorialists have disclosed no relevant financial relationships.

A version of this article first appeared on Medscape.com.
