Machine learning flags key risk factors for suicide attempts


A history of suicidal behaviors or ideation, functional impairment related to mental health disorders, and socioeconomic disadvantage are the three most important risk factors predicting subsequent suicide attempts, new research suggests.

Investigators applied a machine-learning model to data on more than 34,500 adults drawn from a large national survey database. After analyzing more than 2,500 survey questions, the model identified the areas that yielded the most accurate predictions of who might be at risk for a later suicide attempt.

These predictors included previous suicidal behaviors and ideation, functional impairment because of emotional problems, younger age, lower educational achievement, and a recent financial crisis.

“Our machine learning model confirmed well-known risk factors of suicide attempt, including previous suicidal behavior and depression; and we also identified functional impairment, such as doing activities less carefully or accomplishing less because of emotional problems, as a new important risk,” lead author Angel Garcia de la Garza, PhD candidate in the department of biostatistics, Columbia University, New York, said in an interview.

“We hope our results provide a novel avenue for future suicide risk assessment,” Mr. Garcia de la Garza said.

The findings were published online Jan. 6 in JAMA Psychiatry.
 

‘Rich’ dataset

Previous research using machine learning approaches to study nonfatal suicide attempt prediction has focused on high-risk patients in clinical treatment. However, more than one-third of individuals making nonfatal suicide attempts do not receive mental health treatment, Mr. Garcia de la Garza noted.

To gain further insight into predictors of suicide risk in nonclinical populations, the researchers turned to the National Epidemiologic Survey on Alcohol and Related Conditions (NESARC), a longitudinal survey of noninstitutionalized U.S. adults.

“We wanted to extend our understanding of suicide attempt risk factors beyond high-risk clinical populations to the general adult population; and the richness of the NESARC dataset provides a unique opportunity to do so,” Mr. Garcia de la Garza said.

The NESARC surveys were conducted in two waves: wave 1 (2001-2002) and wave 2 (2004-2005), in which participants self-reported nonfatal suicide attempts during the 3 years since wave 1.

Assessment of wave 1 participants was based on the Alcohol Use Disorder and Associated Disabilities Interview Schedule DSM-IV.

“This survey’s extensive assessment instrument contained a detailed evaluation of substance use, psychiatric disorders, and symptoms not routinely available in electronic health records,” Mr. Garcia de la Garza noted.

The wave 1 survey contained 2,805 separate questions. From participants’ responses, the investigators derived 180 variables for three categories: past-year, prior-to-past-year, and lifetime mental disorders.

They then identified 2,978 factors associated with suicide attempts and used a statistical method called a balanced random forest to classify suicide attempts at wave 2. Each wave 1 feature was assigned an “importance score” reflecting its contribution to the model’s predictions.
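
For readers who want a concrete sense of this step, the sketch below shows how a balanced random forest is typically fit and how feature importance scores are extracted, using the Python imbalanced-learn library. It is illustrative only; the file names, feature matrix, and outcome variable are hypothetical stand-ins, not the NESARC data or the authors’ code.

    import pandas as pd
    from imblearn.ensemble import BalancedRandomForestClassifier

    # Hypothetical inputs: one row per wave 1 participant, one column per candidate feature.
    X_wave1 = pd.read_csv("wave1_features.csv")
    y_attempt = pd.read_csv("wave2_outcome.csv")["attempt"]  # 1 = attempt in the 3 years before wave 2

    # Each tree is grown on a bootstrap sample that undersamples the majority
    # (no-attempt) class, which is what makes the forest "balanced".
    model = BalancedRandomForestClassifier(n_estimators=500, random_state=0)
    model.fit(X_wave1, y_attempt)

    # Rank wave 1 features by their importance score.
    importance = pd.Series(model.feature_importances_, index=X_wave1.columns)
    print(importance.sort_values(ascending=False).head(10))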

The outcome variable of attempted suicide at any point during the 3 years prior to the wave 2 interview was defined by combining responses to three wave 2 questions:

  • In your entire life, did you ever attempt suicide?
  • If yes, how old were you the first time?
  • If the most recent event occurred within the last 3 years, how old were you during the most recent time?

Suicide risk severity was classified into four groups (low, medium, high, and very high) on the basis of the top-performing risk factors.

A statistical model combining survey design and nonresponse weights enabled estimates to be representative of the U.S. population, based on the 2000 census.
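
As a rough illustration of how such weights work, the toy example below computes a weighted prevalence estimate with NumPy. The numbers are invented; the actual analysis used the NESARC weighting scheme, not this simplification.

    import numpy as np

    # Hypothetical data: 0/1 indicator of a wave 2 attempt and each respondent's
    # combined survey-design and nonresponse weight.
    attempted = np.array([0, 0, 1, 0, 1])
    weights = np.array([1200.5, 980.0, 1530.2, 875.4, 1105.9])

    # Weighting counts each respondent up or down so the sample mirrors the census benchmark.
    weighted_prevalence = np.average(attempted, weights=weights)
    print(f"weighted prevalence: {weighted_prevalence:.1%}")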

Model performance was assessed with out-of-fold predictions, using the area under the receiver operating characteristic curve (AUC), sensitivity, and specificity.
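
The sketch below illustrates what out-of-fold evaluation looks like in scikit-learn, reusing the hypothetical model and data from the earlier sketch; the 10% risk threshold is an arbitrary example, not the study’s cutoff.

    from sklearn.model_selection import cross_val_predict
    from sklearn.metrics import roc_auc_score, confusion_matrix

    # Each participant's risk is predicted by a model fit on folds that exclude that participant.
    oof_prob = cross_val_predict(model, X_wave1, y_attempt, cv=5,
                                 method="predict_proba")[:, 1]

    auc = roc_auc_score(y_attempt, oof_prob)

    # Sensitivity and specificity at an illustrative 10% predicted-risk threshold.
    predicted = (oof_prob >= 0.10).astype(int)
    tn, fp, fn, tp = confusion_matrix(y_attempt, predicted).ravel()
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    print(f"AUC {auc:.2f}, sensitivity {sensitivity:.2f}, specificity {specificity:.2f}")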
 

Daily functioning

Of all participants, 70.2% (n = 34,653; almost 60% women) completed wave 2 interviews. The weighted mean ages at waves 1 and 2 were 45.1 and 48.2 years, respectively.

Of wave 2 respondents, 0.6% (n = 222) attempted suicide during the preceding 3 years.

Half of those who attempted suicide within the first year were classified as “very high risk,” while 33.2% of those who attempted suicide between the first and second year and 33.3% of those who attempted suicide between the second and third year were classified as “very high risk.”

Among participants who attempted suicide between the third year and follow-up, 16.48% were classified as “very high risk.”

The model classified participants accurately across demographic characteristics such as age, sex, race, and income.

Younger individuals (aged 18-36 years) were at higher risk, compared with older individuals. In addition, women were at higher risk than were men, White participants were at higher risk than were non-White participants, and individuals with lower income were at greater risk than were those with higher income.

The model found that 1.8% of the U.S. population had a 10% or greater risk of a suicide attempt.

The most important risk factors identified were the three questions about previous suicidal ideation or behavior; three items from the 12-Item Short Form Health Survey (feeling downhearted, doing activities less carefully, or accomplishing less because of emotional problems); younger age; lower educational achievement; and recent financial crisis.

“The clinical assessment of suicide risk typically focuses on acute suicidal symptoms, together with depression, anxiety, substance misuse, and recent stressful events,” coinvestigator Mark Olfson, MD, PhD, professor of epidemiology, Columbia University Irving Medical Center, New York, said in an interview.

“The new findings suggest that these assessments should also consider emotional problems that interfere with daily functioning,” Dr. Olfson said.
 

Extra vigilance

Commenting on the study in an interview, April C. Foreman, PhD, an executive board member of the American Association of Suicidology, noted that some of the findings were not surprising.

“When discharging a patient from inpatient care, or seeing them in primary care, bring up mental health concerns proactively and ask whether they have ever attempted suicide or harmed themselves – even a long time ago – just as you ask about a family history of heart disease or cancer, or other health issues,” said Dr. Foreman, chief medical officer of the Kevin and Margaret Hines Foundation.

She noted that half of people who die by suicide have a primary care visit within the preceding month.

“Primary care is a great place to get a suicide history and follow the patient with extra vigilance, just as you would with any other risk factors,” Dr. Foreman said.

The study was funded by the National Institute on Alcohol Abuse and Alcoholism and its Intramural Program. The study authors and Dr. Foreman have reported no relevant financial relationships.

A version of this article first appeared on Medscape.com.


New findings add to questions about existence of gouty nephropathy


Is gouty nephropathy real? It’s a question that has been posed often in rheumatology over the last several decades.

A new study found that 36% of patients with untreated gout at a medical center in Vietnam had diffuse hyperechoic renal medulla on ultrasound, which could indicate the presence of microcrystalline nephropathy. However, the results, published in Kidney International, may raise more questions than answers about the existence of gouty nephropathy and its relation to chronic kidney disease (CKD).

In their study, Thomas Bardin, MD, of the department of rheumatology at Lariboisière Hospital in Paris and colleagues evaluated 502 consecutive patients from Vien Gut Medical Center in Ho Chi Minh City, Vietnam, using B-mode renal ultrasound. The patients were mostly men with a median age of 46 years, body mass index of 25 kg/m2, estimated disease duration of 4 years, and uricemia of 423.2 micromol/L (7.11 mg/dL). Patients had a median estimated glomerular filtration rate (eGFR) of 78 mL/min per 1.73 m2. There was a history of hypertension in 112 patients (22.3%), type 2 diabetes in 58 patients (11.5%), renal lithiasis in 28 patients (5.6%), and coronary heart disease in 5 patients (1%).

While 39% of patients had previously used allopurinol for “a generally short period,” patients were not on urate-lowering therapy at the time of the study. Clinical tophi were present in 279 patients (55.6%), urate arthropathies in 154 patients (30.7%), and 43 patients (10.4%) used steroids daily.

B-mode renal ultrasound showed that 181 patients (36%; 95% confidence interval, 32%-40%) had a “hyperechoic pattern of Malpighi pyramids compared with the adjacent cortex,” which was “associated with twinkling artifacts” visible on color Doppler ultrasound. There was a significant association between renal medulla hyperechogenicity and patient age, disease duration, use of steroids, clinical tophi, and urate arthropathy (P < .0001 for all). A significant association was also seen between renal medulla hyperechogenicity and decreased eGFR (P < .0001), proteinuria (P = .0006), leukocyturia (P = .0008), hypertension (P = .0008), hyperuricemia (P = .002), and coronary heart disease (P = .006).

In a multivariate analysis, there was a significant association between renal medulla hyperechogenicity and clinical tophi (odds ratio, 7.27; 95% CI, 3.68–15.19; P < .0001), urate arthropathy (OR, 3.46; 95% CI, 1.99–6.09; P < .0001), estimated gout duration (OR, 2.13; 95% CI, 1.55–2.96; P < .0001), double contour thickness (OR, 1.45; 95% CI, 1.06–1.97; P < .02), and eGFR (OR, 0.30; 95% CI, 0.09–0.89; P < .034).
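
For readers unfamiliar with how odds ratios and confidence intervals like these are obtained, the sketch below fits a multivariable logistic model with the Python statsmodels library and exponentiates its coefficients. The data file and variable names are hypothetical; this is not the authors’ analysis code.

    import numpy as np
    import pandas as pd
    import statsmodels.api as sm

    df = pd.read_csv("gout_cohort.csv")  # hypothetical dataset, one row per patient
    X = sm.add_constant(df[["clinical_tophi", "urate_arthropathy",
                            "gout_duration", "double_contour_thickness", "egfr"]])
    y = df["medulla_hyperechoic"]        # 1 = hyperechoic renal medulla on ultrasound

    fit = sm.Logit(y, X).fit()
    odds_ratios = np.exp(fit.params)     # exponentiated coefficients are odds ratios
    conf_int = np.exp(fit.conf_int())    # 95% confidence intervals on the OR scale
    print(pd.concat([odds_ratios, conf_int], axis=1))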

“The finding was observed mainly in tophaceous gout, which involved a large proportion of our patients who had received very little treatment with urate-lowering drugs and was associated with moderately impaired renal function and urinary features compatible with tubulointerstitial nephritis,” Dr. Bardin and colleagues wrote in the study. The researchers also found “similar features” in 4 of 10 French patients at Necker Hospital in Paris, and noted that similar findings have been reported in Japan and Korea, which they said may mean hyperechoic medulla “is not unique to Vietnamese patients.”

Relation to CKD still unclear

In a related editorial, Federica Piani, MD, and Richard J. Johnson, MD, of the division of renal diseases and hypertension at the University of Colorado at Denver, Aurora, explained that gout was considered by some clinicians to be a cause of CKD in a time before urate-lowering therapies, because as many as 25% of patients with gout went on to experience kidney failure and about half experienced lower kidney function.

The association between gout and CKD was thought to be attributable to “frequent deposition of urate crystals in the tubular lumens and interstitium in the outer medulla of these patients,” but the concept was later challenged because “the crystals were generally found focally and did not readily explain the kidney damage.”

But even as interest in rheumatology moved away from the concept of gouty nephropathy to how serum uric acid impacts CKD, “the possibility that urate crystal deposition in the kidney could also be contributing to the kidney injury was never ruled out,” according to Dr. Piani and Dr. Johnson.

Kidney biopsies can sometimes miss urate crystals because the crystals dissolve if alcohol fixation is not used and because the biopsy site is often in the renal cortex, the authors noted. Recent research has identified that dual-energy CT scans can distinguish between calcium deposits and urate crystals better than ultrasound, and previous research from Thomas Bardin, MD, and colleagues in two patients noted a correlation between dual-energy CT scan findings of urate crystals and hyperechoic medulla findings on renal ultrasound.

The results by Dr. Bardin and associates, they said, “have reawakened the entity of urate microcrystalline nephropathy as a possible cause of CKD.”

Robert Terkeltaub, MD, professor of medicine at the University of California, San Diego, and section chief of Rheumatology at the San Diego VA Medical Center, said in an interview that he also believes the findings by Dr. Bardin and associates are real. He cited a study by Isabelle Ayoub, MD, and colleagues in Clinical Nephrology from 2016 that evaluated kidney biopsies in Germany and found medullary tophi were more likely to be present in patients with CKD than without.

“Chronic gouty nephropathy did not disappear. It still exists,” said Dr. Terkeltaub, who was not involved in the study by Dr. Bardin and colleagues.

The prospect that, if “you look hard enough, you’re going to see urate crystals and a pattern that’s attributed in the renal medulla” in patients with untreated gout is “very provocative, and interesting, and clinically relevant, and merits more investigation,” noted Dr. Terkeltaub, who is also president of the Gout, Hyperuricemia and Crystal-Associated Disease Network.

If verified, the results have important implications for patients with gout and its relationship to CKD, Dr. Terkeltaub said, but they raise “more questions than answers.

“I think it’s a really good wake-up call to start looking, doing good detective work here, and looking especially in people who have gout as opposed to just people with chronic kidney disease,” he said.

The authors reported no relevant conflicts of interest. Dr. Johnson, who coauthored an accompanying editorial, reported having equity in XORTX Therapeutics, serving as a consultant for Horizon Pharma, and having equity in Colorado Research Partners LLC. Dr. Terkeltaub reported receiving a research grant from AstraZeneca in the field of hyperuricemia and consultancies with AstraZeneca, Horizon, Sobi, and Selecta Biosciences.


A standardized approach to postop management of DOACs in AFib


Clinical question: Is it safe to adopt a standardized approach to direct oral anticoagulant (DOAC) interruption for patients with atrial fibrillation (AFib) who are undergoing elective surgeries/procedures?

Background: At present, perioperative management of DOACs for patients with AFib varies significantly, and robust data are lacking. Points of controversy include the length of time to hold DOACs before and after the procedure, whether to bridge with heparin, and whether to obtain coagulation function studies prior to the procedure.

Study design: Prospective cohort study.

Setting: Conducted in Canada, the United States, and Europe.

Synopsis: The PAUSE study included adults with atrial fibrillation who were long-term users of either apixaban, dabigatran, or rivaroxaban and were scheduled for an elective procedure (n = 3,007). Patients were placed on a standardized DOAC interruption schedule based on whether their procedure had high bleeding risk (held for 2 days prior; resumed 2-3 days after) or low bleeding risk (held for 1 day prior; resumed 1 day after).
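
The interruption schedule described above can be summarized as a simple lookup. The sketch below is just a restatement of those intervals in code; it is not clinical guidance and is not drawn from the PAUSE protocol materials themselves.

    # Days relative to the procedure, as described in the paragraph above.
    # For high-bleeding-risk procedures, resumption occurred 2-3 days after.
    PAUSE_SCHEDULE = {
        "low":  {"hold_days_before": 1, "resume_days_after": (1, 1)},
        "high": {"hold_days_before": 2, "resume_days_after": (2, 3)},
    }

    def doac_interruption(bleeding_risk: str) -> dict:
        """Return the hold/resume intervals for a given procedural bleeding risk."""
        return PAUSE_SCHEDULE[bleeding_risk]

    print(doac_interruption("high"))  # {'hold_days_before': 2, 'resume_days_after': (2, 3)}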

The primary clinical outcomes were major bleeding and arterial thromboembolism. The authors assessed safety by comparing observed rates with expected rates derived from research on perioperative warfarin management.

They found that all three drugs were associated with acceptable rates of arterial thromboembolism (apixaban 0.2%, dabigatran 0.6%, rivaroxaban 0.4%). The rates of major bleeding observed with each drug (apixaban 0.6% low-risk procedures, 3% high-risk procedures; dabigatran 0.9% both low- and high-risk procedures; and rivaroxaban 1.3% low-risk procedures, 3% high-risk procedures) were similar to those in the BRIDGE trial (patients on warfarin who were not bridged perioperatively). However, it must still be noted that only dabigatran met the authors’ predetermined definition of safety for major bleeding.

Limitations include the lack of true control rates for major bleeding and stroke, the relatively low mean CHA2DS2-VASc score of 3.3-3.5, and the fact that more than 95% of patients were White.

Bottom line: For patients with moderate-risk atrial fibrillation, a standardized approach to DOAC interruption in the perioperative period that omits bridging along with coagulation function testing appears safe in this preliminary study.

Citation: Douketis JD et al. Perioperative management of patients with atrial fibrillation receiving a direct oral anticoagulant. JAMA Intern Med. 2019 Aug 5. doi: 10.1001/jamainternmed.2019.2431.

Dr. Gordon is a hospitalist at Maine Medical Center in Portland.


Endoscopic CRC resection carries recurrence, mortality risks


After endoscopic resection, high-risk T1 colorectal cancer (CRC) may have a tenfold greater risk of recurrence than low-risk disease, based on a meta-analysis involving more than 5,000 patients.

These findings support personalized, histologically based surveillance strategies following endoscopic resection of T1 CRC, reported lead author Hao Dang of Leiden University Medical Center in the Netherlands, and colleagues.

“With the introduction of population-based screening programs, a growing number of early-invasive colorectal cancers (T1 CRCs) are detected and treated with local endoscopic resection,” the investigators wrote in Clinical Gastroenterology and Hepatology.

Success with this approach, however, depends upon accurate recurrence risk data, which have been lacking.

Joseph Feuerstein, MD, of the department of medicine at Harvard Medical School, Boston, and associate clinical chief of gastroenterology at Beth Israel Deaconess Medical Center, Boston, said, “While attempting complete resection of an early cancer with a colonoscopy is appealing, given the very low morbidity associated with it, this technique is only advisable if the risk of recurrence is extremely low when comparing [it] to surgical resection.”

In addition to patient selection, accurate recurrence data could also inform postoperative surveillance.

“To determine the optimal frequency and method of surveillance, it is important to know how often, and at which moments in follow-up local or distant CRC recurrences exactly occur,” wrote Mr. Dang and colleagues. “However, for endoscopically treated T1 CRC patients, the definite answers to these questions have not yet been provided.”

To find answers, Mr. Dang and colleagues conducted a meta-analysis involving 71 studies and 5,167 patients with endoscopically treated T1 CRC. The primary outcome was cumulative incidence and time pattern of CRC recurrence. Data were further characterized by local and/or distant metastasis and CRC-specific mortality.

The pooled cumulative incidence of CRC recurrence was 3.3%, with local and distant recurrences occurring at similar, respective rates of 1.9% and 1.6%. Most recurrences (95.6%) occurred within 72 months of endoscopic resection.

Risk-based recurrence analysis revealed a distinct pattern, with high-risk T1 CRCs recurring at a rate of 7.0% (95% confidence interval, 4.9%-9.9%; I2 = 48.1%), compared with just 0.7% for low-risk tumors (95% CI, 0.4%-1.2%; I2 = 0%). Mortality data emphasized the clinical importance of this disparity, as the CRC-related mortality rate was 1.7% across the entire population, versus 40.8% among patients with recurrence.
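
For readers curious about the I2 statistic reported here, the sketch below pools study-level recurrence proportions on the logit scale and computes Cochran’s Q and I2 with NumPy. The event counts are invented, and the calculation is a simplified fixed-effect illustration rather than the meta-analytic model used in the paper.

    import numpy as np

    events = np.array([3, 7, 2, 11])      # recurrences per study (hypothetical)
    totals = np.array([120, 240, 95, 310])

    p = events / totals
    y = np.log(p / (1 - p))                       # logit-transformed proportions
    var = 1 / events + 1 / (totals - events)      # approximate variance of each logit
    w = 1 / var                                   # inverse-variance weights

    pooled_logit = np.sum(w * y) / np.sum(w)
    pooled_p = 1 / (1 + np.exp(-pooled_logit))    # back-transform to a proportion

    q = np.sum(w * (y - pooled_logit) ** 2)       # Cochran's Q
    i2 = 0.0 if q == 0 else max(0.0, (q - (len(y) - 1)) / q)  # share of variation beyond chance
    print(f"pooled incidence {pooled_p:.1%}, I^2 {i2:.0%}")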

“Our meta-analysis provides quantitative measures of relevant follow-up outcomes, which can form the basis for evidence-based surveillance recommendations for endoscopically treated T1 CRC patients,” the investigators concluded.

According to Dr. Feuerstein, the findings highlight the importance of surveillance after endoscopic resection of CRC while adding clarity to appropriate timing.

“Current guidelines recommend a colonoscopy following a colon cancer diagnosis at 1 year and then 3 years and then every 5 years,” Dr. Feuerstein said. “Adhering to these guidelines would likely identify most cases of recurrence early on within the 72-month window identified in this study.” He noted that “high-risk T1 CRC should probably be monitored more aggressively.”

Anoop Prabhu, MD, of the department of medicine at the University of Michigan Medical Center and director of endoscopy at Ann Arbor Veterans Affairs Medical Center, drew similar conclusions from the findings, noting that “tumor histology appears to be a powerful risk-stratification tool for subsequent surveillance.”

“One of the most important take-home messages from this paper is that, in those patients with low-risk, endoscopically resected colon cancer, surveillance with a colonoscopy in 1 year (as opposed to more intense endoscopic or radiographic surveillance) is likely more than adequate and can save unnecessary testing,” Dr. Prabhu said.

To build upon these findings, Dr. Prabhu suggested that upcoming studies could directly compare different management pathways.

“A potential area for future research would be a cost-effectiveness analysis of competing surveillance strategies after upfront endoscopic resection, with a particular focus on cancer-specific survival,” he said.

The investigators disclosed relationships with Boston Scientific, Cook Medical, and Medtronic. Dr. Feuerstein and Dr. Prabhu reported no relevant conflicts of interest.

Help your patients understand colorectal cancer prevention and screening options by sharing AGA’s patient education from the GI Patient Center: www.gastro.org/CRC


Solutions to the pandemic must include public behavior


Many scientific problems are complex. Finding the solution can require the concerted efforts of a team. Producing a vaccine for COVID-19 involved a multidisciplinary team with a wide range of highly specialized expertise, extensive technological resources, and a history of previous scientific discoveries upon whose shoulders today’s scientists can stand.

Dr. Kevin T. Powell

Many ethical problems are also complex. Finding the ideal, multifaceted answer that addresses all the nuances of a social problem requires brilliant minds, a refined ability for logical analysis and rhetoric, the empowerment of the voices of all stakeholders, and attention to social values such as diversity and justice.

In both endeavors, the typical scientists and ethicists involved tend to presume that if they can determine an ideal solution, it will be rapidly and enthusiastically adopted and implemented for the betterment of society. That is, after all, exactly how those researchers would choose to act. Scientists see moral actions as having two steps. The hard part is deciding what is right. Doing the right thing is the easier task. This delusion is ubiquitous. Many scientists and ethicists recognize that a rational society is itself a delusion, yet they proceed as if one exists.

There is a chorus of voices capable of debunking this delusion. Any priest who hears confessions will testify that the vast majority of harm comes from the failure to do what people already know is right, not from uncertainty, confusion, or ignorance. Psychologists and substance abuse counselors are inundated with people who are stuck doing harmful and self-destructive acts. Internists discuss diet and exercise with their patients, but find the advice is infrequently adopted. Master of business administration programs are devoted to training graduates in methods of motivating people to do what is right.

The response of the scientific establishment to the COVID-19 pandemic was imperfect. There were gaps in knowledge, and some early information from China was misleading. The initial CDC test kit production was flawed. The early appeal for the public not to buy masks was strongly driven by a desire to preserve supplies for health care workers. Despite these missteps, the overall advice of scientists was wildly successful and beneficial. The goal was to flatten the curve, and a comparison of the April-June time frame with the November-January period shows markedly fewer COVID-19 cases, hospitalizations, and deaths in the earlier period. Confronted with the pandemic of the century, my assessment is that the scientific establishment has performed well.

I am far more negative in my assessment of the institutions that support morality, form the social order, establish justice, and promote the general welfare. For instance, misinformation on social media is rampant, including conspiracy theories and outright denials of the pandemic. Scientific advice has been undercut and impugned. Policy recommendations of esteemed scientific institutions have been ignored. The public’s cooperation has fatigued. Laws on public gatherings, quarantines, and social distancing have been broken. Communitarian ethics and devotion to the common good have been left in a trash heap. The consequences have been hundreds of thousands of lives lost in 2020, and some states are on the brink of much worse.

Medical ethicists have debated in fine detail how to triage ventilators, ration antibody treatments, and prioritize vaccinations. Those policy recommendations have had limited influence. Medical ethics has inadequately addressed the age-old problem of morality, which is getting people to behave as they know they ought. Modern medical ethics may have exacerbated the deviancy. Medical ethics for 50 years has emphasized replacing paternalism with autonomy, but it has not adequately promoted communitarian virtues, self-regulation, and personal integrity.

There were many accomplishments and many people to admire in 2020, compared with the historical actions of health care professionals during crises. Doctors, confronted with the COVID-19 plague, have not abandoned the cities as happened in prior centuries. Patients have not been shunned like lepers, though the total-body protective equipment and the no-visitor policies come very close. Nurses have heroically provided bedside care, though I am haunted by one dissident nurse at a protest carrying a sign that read, “Don’t call me a hero. I am being martyred against my will.”

As a scientist, I am prone to the delusion that, if I can build a better mousetrap, people will use it. I’ve lived with that delusion for decades. It carries over into my medical ethics work. Yet I see hospitals in California being overwhelmed by a surge on top of a surge due to unwise and unsafe holiday travel. I can see that optimized solutions aren’t the answer; the answer is better behavior by the public. I recall when I was a child, my mother would simply command, “Behave yourself.” And never, in any of those recollections, was I in doubt about which correct behavior she meant.
 

Dr. Powell is a pediatric hospitalist and clinical ethics consultant living in St. Louis. Email him at [email protected].


Greater reductions in knee OA pain seen with supportive rather than flexible shoes


Patients with knee osteoarthritis (OA) who wear stable supportive shoes for 6 months have greater average reductions in knee pain when walking, compared with patients who wear flat flexible shoes, according to a randomized trial that included more than 160 patients.


“Contrary to our hypothesis, flat flexible shoes were not superior to stable supportive shoes,” reported Kade L. Paterson, PhD, of the University of Melbourne, and colleagues. Their study was published Jan. 12 in Annals of Internal Medicine.
 

Research gap

Abnormal knee joint loading has been implicated in the pathogenesis of knee OA. Guidelines recommend that patients wear appropriate footwear, but research has not established which shoes are best.

The 2019 American College of Rheumatology clinical guidelines note that “optimal footwear is likely to be of considerable importance for those with knee and/or hip OA,” but “the available studies do not define the best type of footwear to improve specific outcomes for knee or hip OA.”

Some doctors call for thick, shock-absorbing soles and arch supports, based on expert opinion. On the other hand, studies have found that knee loading is lower with flat flexible shoes, and preliminary evidence has suggested that flat flexible shoes may improve OA symptoms, the investigators said.

To study this question, they enrolled in their trial 164 patients aged 50 years and older who had radiographic medial knee OA. Participants had knee pain on most days of the previous month, tibiofemoral osteophytes, and moderate to severe tibiofemoral OA.

The researchers randomly assigned 82 participants to flat flexible shoes and 82 participants to stable supportive shoes, worn for at least 6 hours a day for 6 months.

In the trial, flat flexible shoes included Merrell Bare Access (men’s and women’s), Vivobarefoot Primus Lite (men’s and women’s), Vivobarefoot Mata Canvas (men’s), Converse Dainty Low (women’s), and Lacoste Marice (men’s).

Stable supportive shoes included ASICS Kayano (men’s and women’s), Merrell Jungle Moc (men’s), Nike Air Max 90 Ultra (women’s), Rockport Edge Hill (men’s), and New Balance 624 (women’s).

After participants were randomly assigned to a group, they chose two different pairs of shoes from their assigned footwear group.

“Participants were not told that the purpose of the study was to compare flat flexible with stable supportive shoes,” the researchers noted. “Instead, they were informed only that the trial was comparing the effects of ‘different shoes’ on knee OA symptoms.”

The primary outcomes were changes in walking pain on a 0-10 scale and physical function as assessed by the Western Ontario and McMaster Universities Osteoarthritis Index subscale at 6 months. The researchers also assessed other measures of pain and function, physical activity, and quality of life.

In all, 161 participants reported 6-month primary outcomes. The between-group difference in change in pain favored stable supportive shoes (mean difference, 1.1 units). In the flat flexible shoe group, overall average knee pain while walking decreased from 6.3 at baseline to 5.2 at 6 months. In the stable supportive shoe group, knee pain while walking decreased from 6.1 to 4.0.
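
As a crude arithmetic check (not the trial’s own adjusted analysis), the within-group changes reported above can be compared directly, as in the short sketch below; the small gap between this simple difference and the reported 1.1-unit estimate would reflect rounding of the group means and the covariate adjustment used in the trial.

    # Crude check on the between-group difference in change in walking pain,
    # using only the rounded group means quoted above.
    flat_change = 6.3 - 5.2          # improvement with flat flexible shoes
    supportive_change = 6.1 - 4.0    # improvement with stable supportive shoes
    between_group = supportive_change - flat_change
    print(f"flat: {flat_change:.1f}, supportive: {supportive_change:.1f}, "
          f"difference: {between_group:.1f} units")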

In addition, improvements in knee-related quality of life and ipsilateral hip pain favored stable supportive shoes.

Participants who wore stable supportive shoes also were less likely to report adverse events, compared with those who wore flat flexible shoes (15% vs. 32%). Knee pain, ankle or foot pain, and shin or calf pain were among the adverse events reported.
 

 

 

‘Important work’

“This study suggests that more supportive shoes may help some patients with knee osteoarthritis feel better,” Constance R. Chu, MD, professor of orthopedic surgery at Stanford (Calif.) University, said in an interview. “Shoes, insoles, wedges, and high heels have been shown to change loading of the knee related to knee pain and osteoarthritis ... This is important work toward providing more specific information on the optimum shoes for people with different patterns and types of arthritis to reduce pain and disability from early knee OA.”

Dr. Constance R. Chu

The reported changes in pain may be clinically meaningful for many but not all patients, the authors wrote. “Despite biomechanical evidence showing that flat flexible shoes reduce medial knee load compared with stable supportive shoes, our findings show that this does not translate to improved knee osteoarthritis symptoms,” they said. “This may be because relationships between knee loading and symptoms are not as strong as previously thought, or because the small reductions in medial knee load with flat flexible shoes are insufficient to substantively improve pain and function.”

The trial did not include a control group of patients who wore their usual shoes, and it focused on a select subgroup of patients with knee OA, which may limit the study’s generalizability, the authors noted. The study excluded people with lateral joint space narrowing greater than or equal to medial, those with recent or planned knee surgery, and those who were using shoe orthoses or customized shoes.

The study was supported by grants from the National Health and Medical Research Council. Dr. Chu had no relevant disclosures.


Factors influencing early molecular response to imatinib therapy in CML


Key clinical point: In patients with chronic-phase chronic myeloid leukemia (CP-CML), steady state plasma imatinib levels, MDR1 polymorphisms, and ABC transporter expression influence early molecular response (EMR)/major molecular response (MMR) to imatinib therapy, which in turn influence failure-free survival (FFS).

Major finding: Patients with low and intermediate Sokal scores showed better 2-year FFS vs. those with a high Sokal score (P = .02). Patients with the variant MDR1/ABCB1-C1236T genotype had higher day 29 plasma imatinib levels (P = .005), higher rates of EMR at 3 months (P = .044), and better 2-year FFS (P = .003) vs. those with the wild-type genotype. Patients with lower ABCB1 mRNA expression showed significantly higher intracellular imatinib levels (P = .029). The median day 29 plasma imatinib level was significantly higher in patients who achieved EMR at 3 months (P = .022) and MMR at 12 months (P = .041), which in turn was associated with better 2-year FFS (P = .05).

Study details: This prospective, single-center observational study evaluated factors influencing EMR to imatinib and FFS in newly diagnosed CP-CML patients (n = 160).

Disclosures: No study sponsor was identified. The authors declared no conflicts of interest.

Source: Rajamani BM et al. Sci Rep. 2020 Nov 26. doi: 10.1038/s41598-020-77140-9.


Adverse events in CML patients treated with TKIs


Key clinical point: Tyrosine kinase inhibitor (TKI) therapy is associated with a higher burden of adverse events in patients with chronic myelogenous leukemia (CML) than in comparable people without cancer. Later-generation TKIs may have greater toxicity than imatinib.

Major finding: The 5-year cumulative incidence of almost all major organ system outcomes was significantly higher for the CML + TKI group vs. the noncancer group (P < .05). In the first year, later-generation TKIs vs. imatinib were associated with primary infections (hazard ratio [HR], 1.43; 95% confidence interval [CI], 1.02-2.00), circulatory events (HR, 1.15; 95% CI, 1.01-1.31), and skin issues (HR, 1.43; 95% CI, 1.13-1.80). Musculoskeletal and nervous system/sensory issues were less common with later-generation TKIs vs. imatinib (HR, 0.83-0.84; P < .05).
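
As an aside for readers who want to sanity-check a figure like the infection hazard ratio of 1.43 (95% CI, 1.02-2.00), the sketch below back-calculates the approximate standard error of the log hazard ratio from the reported interval and then reconstructs the bounds. This is generic arithmetic for symmetric log-scale confidence intervals, not part of the study’s own analysis.

    import math

    # Infection HR and 95% CI quoted in the summary above.
    hr, lo, hi = 1.43, 1.02, 2.00

    # On the log scale, a 95% CI spans roughly 2 * 1.96 standard errors.
    se_log_hr = (math.log(hi) - math.log(lo)) / (2 * 1.96)

    recon_lo = math.exp(math.log(hr) - 1.96 * se_log_hr)
    recon_hi = math.exp(math.log(hr) + 1.96 * se_log_hr)
    print(f"SE(log HR) ~ {se_log_hr:.3f}; reconstructed CI ~ ({recon_lo:.2f}, {recon_hi:.2f})")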

Study details: This real-world analysis of health plan enrollees evaluated adverse events in CML patients treated with TKIs (n = 1,200) compared with a noncancer cohort (n = 7,635; median follow-up, approximately 3 years).

Disclosures: The study was funded by Stand Up To Cancer, the American Association for Cancer Research, and the U.S. National Institutes of Health. The authors declared no conflicts of interest.

Source: Chow EJ et al. Leuk Lymphoma. 2020 Dec 7. doi: 10.1080/10428194.2020.1855340.


Ph+ CML-CP: Bosutinib is effective across age groups and mCCI scores


Key clinical point: Bosutinib is effective in patients with Philadelphia chromosome–positive chronic myeloid leukemia in chronic phase (Ph+ CML-CP) resistant/intolerant to prior therapy, across age groups and across scores on the modified Charlson Comorbidity Index without the age component (mCCI).

Major finding: A substantial proportion of patients attained or maintained molecular response across age groups and mCCI scores. Older patients and those with mCCI 4 showed a trend toward higher rates of grade 3/4 treatment-related adverse events.

Study details: The data come from the ongoing, phase 4, single-arm, open-label BYOND study examining the safety and efficacy of bosutinib.

Disclosures: The study was sponsored by Pfizer. No data available regarding conflicts of interest.

Source: Gambacorti-Passerini C et al. Poster. Abstract 055. BSH 2020. 2020 Nov 9-14.


Personalized treatment recommendations in patients with CML-CP


Key clinical point: Personalized treatment selection according to the LEukemia Artificial intelligence Program (LEAP) recommendations for patients with chronic myeloid leukemia in chronic phase (CML-CP) is associated with a better likelihood of survival.

Major finding: The LEAP CML-CP recommendation was associated with improved overall survival (P < .001).

Study details: A cohort of CML-CP patients was randomly assigned to training/validation (n = 504) and test cohorts (n = 126). The training/validation cohort was used to develop the LEAP CML-CP model using 101 variables at diagnosis. The test cohort was then applied to the LEAP CML-CP model and an optimum tyrosine kinase inhibitor therapy was selected for each patient.
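
The 504/126 division described above corresponds to holding out roughly 20% of the 630-patient cohort as a test set. A minimal sketch of that splitting step is shown below, using scikit-learn’s train_test_split on a synthetic list of patient IDs; it illustrates the cohort split only and does not reproduce the LEAP model or its 101 diagnostic variables.

    from sklearn.model_selection import train_test_split

    patient_ids = list(range(630))  # synthetic stand-in for the 630-patient cohort
    train_val, test = train_test_split(patient_ids, test_size=0.2, random_state=0)
    print(len(train_val), len(test))  # -> 504 126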

Disclosures: The study was supported by the University of Texas MD Anderson Cancer Center Support Grant from the National Institutes of Health, the National Institutes of Health/National Cancer Institute under award, the University of Texas MD Anderson MDS/AML Moon Shot, and Leukemia Texas. K Sasaki, EJ Jabbour, F Ravandi, M Konopleva, G Garcia-Manero, JE Cortes, and C DiNardo reported relationships with various pharmaceutical companies. The remaining authors declared no conflicts of interest.

Source: Sasaki K et al. Am J Hematol. 2020 Nov 12.  doi: 10.1002/ajh.26047.
