HCV and alcohol use disorder – bad news for the liver

Patients with both alcohol use disorder (AUD) and hepatitis C virus (HCV) infection were twice as likely to present with advanced liver fibrosis at hospital admission, according to the results of a database study published in Drug and Alcohol Dependence (2018;188:180-6).

The study population consisted of 1,313 patients (80% men). Median age at admission was 45 years and the median alcohol consumption was 200 g/day. HCV infection was present in 236 patients (18%), according to Arantza Sanvisens, MD, of the Universitat Autònoma de Barcelona, Badalona, Spain, and her colleagues.

Compared with patients without HCV infection, AUD patients with HCV infection were significantly younger, were more likely to have used intravenous drugs, had begun alcohol consumption at a younger age, drank larger quantities of alcohol, and were more likely to be current users of opiates and cocaine.

After adjustment for sex, age, and quantity of alcohol consumed, patients with HCV infection were about twice as likely to have advanced liver fibrosis (odds ratio, 2.1; 95% confidence interval, 1.5–3.1).

“Successful evaluation of liver damage in this population includes the management of both excessive alcohol consumption and chronic HCV-related disease,” according to Dr. Sanvisens and her colleagues. “Furthermore, current guidelines from the American Association for the Study of Liver Disease, the European Association for the Study of the Liver, and the World Health Organization already recommend treatment of HCV infection in individuals with substance use disorder,” they concluded.

The authors reported that they had no conflicts of interest.

SOURCE: Sanvisens A et al. Drug and Alcohol Dependence. 2018;188:180-6.

Effect of High-Dose Ergocalciferol on Rate of Falls in a Community-Dwelling, Home-Based Primary Care Veteran Population: A Case-Crossover Study

No difference was identified in the rate of falls immediately before and after initiation of ergocalciferol 50,000 IU self-administered once weekly.

Annually, about 1 in 4 individuals aged ≥ 65 years will experience at least 1 fall, resulting in nearly 2.8 million cases of emergently treated injuries and more than 800,000 hospitalizations.1-3 Therefore, fall prevention has garnered heightened attention as the population ages. Many factors contribute to fall risk, including vitamin D status.

Although vitamin D is essential for a multitude of physiologic processes, evidence suggests that serum concentrations of 25-hydroxy vitamin D (25[OH]D) < 30 ng/mL are associated with decreased bone mineral density, muscle weakness, impaired lower extremity function, balance problems, and high fall rates.4-12 In a meta-analysis published in 2009 that included 8 randomized controlled trials of 2,426 participants aged ≥ 65 years, Bischoff-Ferrari and colleagues found that a dose of 700 to 1,000 IU/d significantly reduced the risk of falling compared with doses of 200 to 600 IU/d.13 A subsequent meta-analysis published in 2012, which included 14 randomized trials across 28,135 participants aged ≥ 65 years, evaluated the efficacy of vitamin D supplementation with or without calcium cosupplementation on fall prevention.14 Although no difference was found in falls across the total sample, a subgroup analysis exploring the effect in participants with lower vitamin D levels demonstrated a statistically significant benefit of vitamin D supplementation. To decrease the risk of fractures and falls, the American Geriatrics Society (AGS) recommends vitamin D supplementation of at least 1,000 IU/d in combination with calcium supplementation in older adults, with a minimum goal 25(OH)D level of 30 ng/mL.15

Alarmingly, Bischoff-Ferrari and colleagues published a double-blind, randomized trial that described an association between higher monthly doses of vitamin D3 (cholecalciferol) and an increased risk of falls compared with 24,000 IU/mo, particularly at higher achieved levels of 25(OH)D, while no added benefit was noted on the primary endpoint of lower extremity function.16

Although data on high-dose vitamin D2 and its effect on falls in those aged ≥ 65 years are limited, once-weekly prescribing of vitamin D2 in the form of ergocalciferol 50,000 IU remains a commonly used option for repletion of low 25(OH)D. In this study, the authors evaluated the effect of high-dose ergocalciferol on the rate of falls in a community-dwelling veteran population aged ≥ 65 years with low 25(OH)D.

Methods

Following approval from the Lexington Veterans Affairs Medical Center (Lexington VAMC) Institutional Review Board and Research and Development Committee, a retrospective chart review was conducted. Subjects were identified using Microsoft SQL (Redmond, WA). Veterans included were those enrolled in home-based primary care (HBPC), a primary care assignment for those individuals requiring skilled services and case management within the home and for whom falls are documented within the electronic health record (EHR). As fall data in a community-dwelling population are difficult to obtain in a retrospective analysis, the HBPC population offered a viable pool of data for evaluation. Some patients eligible for HBPC at the Lexington VAMC may be more dependent on specialized services offered through HBPC or have a reduced ability to perform activities of daily living (ADLs). Other patients can ambulate but may have difficulty traveling great distances to Lexington VAMC.

In addition to HBPC enrollment, veterans were included in the study if they were aged ≥ 65 years and had a 25(OH)D level < 20 ng/mL with subsequent prescribing of high-dose vitamin D2 for repletion, namely, ergocalciferol 50,000 IU once weekly, between March 1, 2005, and September 30, 2016.

Veterans were excluded if they had been enrolled in HBPC for less than 60 days before ergocalciferol initiation, if they were deceased or had been discharged from HBPC within 60 days of ergocalciferol initiation, if they had comorbid conditions that inherently increase the risk of falls (eg, Lewy body dementia, Parkinson disease, bilateral below-the-hip amputation, and hemi- or quadriplegia), or if they had been dispensed a previous prescription of ergocalciferol in the preceding 9 months.
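As an illustration only, the inclusion and exclusion criteria above could be expressed as a simple cohort filter. The sketch below is written in Python with pandas; every column name (age, vit_d_level, fill_date, hbpc_enroll_date, high_risk_comorbidity, prior_ergo_within_9_months) is hypothetical and is not drawn from the VA data dictionary or the authors' actual Microsoft SQL query.

```python
import pandas as pd

def select_cohort(df: pd.DataFrame) -> pd.DataFrame:
    """Apply the study's main inclusion/exclusion criteria to a candidate roster.

    Expects one row per veteran with hypothetical columns: age, vit_d_level (ng/mL),
    fill_date and hbpc_enroll_date (datetimes), high_risk_comorbidity (bool), and
    prior_ergo_within_9_months (bool). Simplified: death or discharge within
    60 days of initiation is not modeled here.
    """
    included = (
        (df["age"] >= 65)
        & (df["vit_d_level"] < 20)
        & df["fill_date"].between(pd.Timestamp("2005-03-01"), pd.Timestamp("2016-09-30"))
        # enrolled in HBPC for at least 60 days before ergocalciferol initiation
        & ((df["fill_date"] - df["hbpc_enroll_date"]).dt.days >= 60)
    )
    excluded = df["high_risk_comorbidity"] | df["prior_ergo_within_9_months"]
    return df[included & ~excluded]
```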

A case-crossover study design was used, which compared the 60-day period prior to initiation of ergocalciferol supplementation with the 60-day period following initiation of supplementation. A 7-day period between these 2 periods was allotted to allow time for mailing of the new prescription and initiation of the supplement.
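A minimal sketch of how a documented fall could be assigned to the pre- or post-initiation window under this design follows (Python); the function, field names, and the exact handling of the 7-day gap are illustrative assumptions based on the description above, not the authors' code.

```python
from typing import Optional

import pandas as pd

WINDOW_DAYS = 60  # length of each observation period
GAP_DAYS = 7      # allowance for prescription mailing and supplement initiation

def classify_fall(fill_date: pd.Timestamp, fall_date: pd.Timestamp) -> Optional[str]:
    """Return 'pre', 'post', or None for a fall relative to the ergocalciferol fill date."""
    pre_start = fill_date - pd.Timedelta(days=WINDOW_DAYS)
    post_start = fill_date + pd.Timedelta(days=GAP_DAYS)
    post_end = post_start + pd.Timedelta(days=WINDOW_DAYS)

    if pre_start <= fall_date < fill_date:
        return "pre"   # within the 60 days preceding initiation
    if post_start <= fall_date < post_end:
        return "post"  # within the 60 days following the 7-day gap
    return None        # outside both windows, including the gap itself
```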

Data Collection

Data collected included age, sex, levels of 25(OH)D, ergocalciferol prescription data (dose, administration frequency, quantity, day supply, and fill date), falls documented during the 60 days preceding and during supplementation, and the number of medications that posed an increased risk of falls actively prescribed prior to and during supplementation. Those medications considered to increase risk of falls were determined according to the medications listed in the AGS 2015 Updated Beers Criteria for Potentially Inappropriate Medication Use in Older Adults.17

 

 

Endpoints

The primary endpoint assessed was the change in rate of falls between the time preceding and during supplementation. The number of falls during the 60 days preceding ergocalciferol supplementation was standardized to falls per person per 30 days and compared with the same parameter during the 60-day period following initiation of ergocalciferol.
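In practical terms, the standardization divides the fall count by the number of subjects and by the number of 30-day intervals in the observation window. The short sketch below (Python) is not the authors' code; it simply works through the arithmetic using the cohort totals reported later in the Results (24 falls among 80 subjects in each 60-day period).

```python
def falls_per_person_per_30_days(total_falls: int, n_subjects: int, window_days: int = 60) -> float:
    """Standardize a raw fall count to falls per person per 30 days."""
    return total_falls / n_subjects / (window_days / 30)

# Worked check with the totals reported in the Results section:
rate = falls_per_person_per_30_days(total_falls=24, n_subjects=80, window_days=60)
print(round(rate, 2))  # 0.15 falls per person per 30 days in each 60-day period
```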

The secondary outcome was the rate of falls according to the level of 25(OH)D achieved as a result of supplementation in those patients who achieved a minimum 25(OH)D level of 30 ng/mL according to AGS recommendations. Those patients who achieved a minimum 25(OH)D concentration of 30 ng/mL were separated into 2 equal groups according to their respective concentration relative to the median.
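A simple way to form that median split is sketched below (Python); achieved_levels is a hypothetical list of post-supplementation 25(OH)D values for the subjects who reached the 30 ng/mL target, not study data.

```python
from typing import List, Tuple

def split_by_median(achieved_levels: List[float]) -> Tuple[List[float], List[float]]:
    """Split subjects who reached the 30 ng/mL target into lower and upper halves
    relative to the median achieved 25(OH)D concentration."""
    ordered = sorted(achieved_levels)
    midpoint = len(ordered) // 2
    return ordered[:midpoint], ordered[midpoint:]

# With the 29 subjects who reached the target, this yields groups of 14 and 15,
# matching the 30-36 ng/mL and > 36 ng/mL strata described in the Results.
```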

Statistical Analysis

Numerical variables were compared using a Student t test. For the primary outcome, 64 participants were required to achieve 80% power at a 2-tailed significance level of .05, each serving as his or her own control in the case-crossover study design. For the secondary outcome of falls according to the 25(OH)D level achieved following supplementation, a total of 128 participants reaching a minimum 25(OH)D level of 30 ng/mL were required to achieve 80% power at a 2-tailed significance level of .05.
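Because each subject serves as his or her own control, the primary comparison can be framed as a paired test of per-subject pre- versus post-initiation fall rates. The SciPy sketch below illustrates that framing; the article specifies only a Student t test, so the paired formulation and the example arrays are assumptions for illustration, not the authors' analysis.

```python
import numpy as np
from scipy import stats

# Per-subject fall rates (falls per person per 30 days); placeholder values only.
rate_pre = np.array([0.0, 0.5, 0.0, 1.0, 0.0, 0.5, 0.5, 0.0])
rate_post = np.array([0.5, 0.0, 0.0, 0.5, 0.5, 0.0, 0.5, 0.5])

# Dependent-samples (paired) t test: each subject is compared with himself or herself.
t_stat, p_value = stats.ttest_rel(rate_post, rate_pre)
print(f"t = {t_stat:.2f}, p = {p_value:.3f}")
```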

Results

After screening 187 subjects who met the inclusion criteria, 107 subjects were excluded (Figure).

Of the 80 study enrollees, 78 were male. The mean age was 81 years, with 81.3% (n = 65) aged ≥ 75 years. The mean 25(OH)D level prior to supplementation was 14.5 ng/mL (SD 4.2).
The mean number of potentially inappropriate medications that may increase risk of falls (PIMs-F) was 0.81 PIMs-F per person (SD 0.92). Baseline patient characteristic data are summarized in Table 1. 


Primary Endpoint

Following once-weekly supplementation with ergocalciferol 50,000 IU, 25(OH)D levels increased from 14.5 ng/mL (SD 4.2) to 27.6 ng/mL (SD 9.6) (P < .01). Of note, the timing of the 25(OH)D level obtained following initiation of supplementation ranged between 8 weeks and 24 weeks. The number of PIMs-F decreased marginally, although not to a statistically significant degree, from 0.81 PIMs-F per person (SD 0.92) to 0.76 PIMs-F per person (SD 0.88).

The number of falls in the cohort was identical preceding and during supplementation, totaling 24 falls in each 60-day period, which equates to a standardized rate of 0.15 falls per person per 30 days in each period (P = .99) (Table 2).

Secondary Endpoint

Although 51 of the subjects (63.8%) failed to achieve the target 25(OH)D level of ≥ 30 ng/mL, 29 were successful (Table 3). 

Of those, 14 subjects achieved a 25(OH)D level of 30 to 36 ng/mL (mean 33.5 ng/mL, SD 2.0), and the remaining 15 subjects achieved a 25(OH)D level of > 36 ng/mL (mean 42.8 ng/mL, SD 5.2). In subjects whose achieved 25(OH)D level was < 30 ng/mL, the rate of falls per person per 30 days decreased from 0.2 during the 60 days preceding ergocalciferol supplementation to 0.1 during the 60 days following initiation of supplementation.

In subjects whose achieved 25(OH)D level was 30 to 36 ng/mL, the rate of falls per person per 30 days increased from 0.036 to 0.18. Similarly, an increase in the rate of falls per person per 30 days from 0.1 to 0.3 was noted in subjects whose attained 25(OH)D level was > 36.0 ng/mL. However, the study was underpowered for the secondary endpoint, so statistical significance cannot be claimed for these findings.

Discussion

In this retrospective chart review, individuals aged ≥ 65 years who were prescribed once-weekly ergocalciferol 50,000 IU for repletion of 25(OH)D levels < 20 ng/mL experienced no change in the rate of falls across the entire study population. In those individuals whose achieved 25(OH)D level met the AGS recommendation of ≥ 30 ng/mL, there was a trend toward an increased rate of falls, whereas the rate of falls decreased for subjects whose achieved 25(OH)D level was < 30 ng/mL.

High-dose vitamin D supplementation, albeit with vitamin D3, and its effect on falls have been evaluated previously in the geriatric population, most notably and recently by Bischoff-Ferrari and colleagues.16 In a study comparing 24,000 IU vitamin D3 per month, 60,000 IU vitamin D3 per month, and 24,000 IU vitamin D3 plus calcifediol 300 µg per month, lower extremity function did not differ among the 3 groups. However, an increased number of falls was noted in both the 60,000 IU and the calcifediol arms. Furthermore, after 12 months of treatment, those individuals who achieved the highest quartile of 25(OH)D level (44.7-98.9 ng/mL) had starkly increased odds of falling and number of falls compared with those achieving the lowest quartile (21.3-30.3 ng/mL).

The results of this study suggest that once-weekly high-dose vitamin D2 may carry a risk of increasing falls similar to that found with high-dose vitamin D3, particularly at higher achieved levels of 25(OH)D. A possible explanation for the lower rate of falls in those individuals who did not achieve a 25(OH)D level of at least 30 ng/mL could be that these individuals may not have initiated the medication appropriately or adhered to the regimen, thereby avoiding a possible deleterious effect that the high-dose preparation may pose in this population.

Given the retrospective nature of the study and the evaluation of the change in the 25(OH)D level following approximately a 90-day supply of ergocalciferol, adherence was not addressed. In this case, although increased 25(OH)D level was the desired outcome of vitamin D supplementation, the increase in rate of falls may be attributable to the high-dose preparation itself. Alternatively, the 25(OH)D target of ≥ 30 ng/mL may be worth reconsidering in favor of a lower target with an upper limit.

The rate of falls in this study was collected over the 60 days following initiation of ergocalciferol. However, the achieved 25(OH)D level was not evaluated until between 8 and 24 weeks following initiation. In this context, it may be more likely that the increased rate of falls could be attributable to the high-dose nature of vitamin D2 supplementation or the rate of 25(OH)D repletion rather than the 25(OH)D level ultimately achieved.

 

 

Limitations

Given the study’s retrospective nature, at times there was difficulty in locating information in the EHR, including accurate reports of active medication use during study periods or documentation of all falls that had occurred in the appropriate format. This was further complicated by the reliance on self-reporting of falls, which may potentiate an underestimation of total falls.

The largely homogeneous study population may limit the extrapolation of these results. Additionally, although some diseases and medications with an inherent effect on fall risk were incorporated into the exclusion criteria, on analysis, other diseases and medications were identified that also may pose a similar risk. These include legal blindness and a history of below-the-knee amputation as well as long-term opioid therapy and intensive antihypertensive therapy with multiple agents. Furthermore, other potential risk factors for falls were not addressed, such as functional status, use of assistive devices, or unsafe home environments.

For the secondary endpoint, the required sample size was not met, which limited the study’s ability to confirm the observed trend of increased falls. Study duration posed an additional limitation. As most veterans enrolled in HBPC have vitamin D supplementation initiated soon after enrollment, when the need for vitamin D repletion is routinely assessed, a 2-month duration for evaluation prior to and immediately following initiation of ergocalciferol was necessary to allow adequate study enrollment for analysis of the primary endpoint. However, this may be resolved through the conduct of a prospective study in the future.

Conclusion

No difference was identified in the rate of falls immediately before and after initiation of ergocalciferol 50,000 IU self-administered once weekly. There was a trend toward an increased rate of falls in subjects who achieved high levels of 25(OH)D. In light of the similar finding that high-dose vitamin D3 is associated with an increased rate of falls, particularly at higher achieved levels of 25(OH)D, it may be warranted to consider avoiding high-dose vitamin D2 supplementation. Future research, including prospective, randomized clinical studies with a longer duration of follow-up, is recommended to confirm these findings and test their generalizability in the non-HBPC community-dwelling population.

References

1. Stevens JA, Ballesteros MF, Mack KA, Rudd RA, DeCaro E, Adler G. Gender differences in seeking care for falls in the aged Medicare population. Am J Prev Med. 2012;43(1):59-62.

2. Centers for Disease Control and Prevention, National Center for Injury Prevention and Control. Welcome to WISQARS. https://www.cdc.gov/injury/wisqars/index.html. Updated February 5, 2018. Accessed April 10, 2018.

3. O’Loughlin JL, Robitaille Y, Boivin JF, Suissa S. Incidence of and risk factors for falls and injurious falls among the community-dwelling elderly. Am J Epidemiol. 1993;137(3):342-354.

4. Bischoff-Ferrari HA, Dawson-Hughes B, Willett WC, et al. Effect of vitamin D on falls: a meta-analysis. JAMA. 2004;291(16):1999-2006.

5. Holick MF. Resurrection of vitamin D deficiency and rickets. J Clin Invest. 2006;116(8):2062-2072.

6. Bischoff-Ferrari HA, Giovannucci E, Willett WC, Dietrich T, Dawson-Hughes B. Estimation of optimal serum concentrations of 25-hydroxyvitamin D for multiple health outcomes. Am J Clin Nutr. 2006;84(1):18-28.

7. Bischoff HA, Stähelin HB, Dick W, et al. Effects of vitamin D and calcium supplementation on falls: a randomized controlled trial. J Bone Miner Res. 2003;18(2):343-351.

8. Bischoff-Ferrari HA, Dietrich T, Orav EJ, et al. Higher 25-OH vitamin D concentrations are associated with better lower-extremity function in both active and inactive persons aged > 60 years. Am J Clin Nutr. 2004;80(3):752-758.

9. Pfeifer M, Begerow B, Minne HW, Abrams C, Nachtigall D, Hansen C. Effects of a short-term vitamin D and calcium supplementation on body sway and secondary hyperparathyroidism in elderly women. J Bone Miner Res. 2000;15(6):1113-1118.

10. Sambrook PN, Chen JS, March LM, et al. Serum parathyroid hormone predicts time to fall independent of vitamin D status in a frail elderly population. J Clin Endocrinol Metab. 2004;89(4):1572-1576.

11. Flicker L, Mead K, MacInnis RJ, et al. Serum vitamin D and falls in older women in residential care in Australia. J Am Geriatr Soc. 2003;51(11):1533-1538.

12. Faulkner KA, Cauley JA, Zmuda JM, et al. Higher 1,25-dihydroxyvitamin D3 concentrations associated with lower fall rates in older community-dwelling women. Osteoporos Int. 2006;17(9):1318-1328.

13. Bischoff-Ferrari HA, Dawson-Hughes B, Staehelin HB, et al. Fall prevention with supplemental and active forms of vitamin D: a meta-analysis of randomized controlled trials. BMJ. 2009;339:b3692.

14. Gillespie LD, Robertson MC, Gillespie WJ, et al. Interventions for preventing falls in older people living in the community. Cochrane Database Syst Rev. 2009;(2):CD007146.

15. American Geriatrics Society Workgroup on Vitamin D Supplementation for Older Adults. Recommendations abstracted from the American Geriatrics Society Consensus Statement on vitamin D for the prevention of falls and their consequences. J Am Geriatr Soc. 2014;62(1):147-152.

16. Bischoff-Ferrari HA, Dawson-Hughes B, Orav EJ, et al. Monthly high-dose vitamin D treatment for the prevention of functional decline: a randomized clinical trial. JAMA Intern Med. 2016;176(2):175-183.

17. American Geriatrics Society 2015 Beers Criteria Update Expert Panel. American Geriatrics Society 2015 updated Beers Criteria for potentially inappropriate medication use in older adults. J Am Geriatr Soc. 2015;63(11):2227-2246.

Author and Disclosure Information

Dr. Albers is a Clinical Pharmacy Specialist with the VA Northern Indiana Health Care System. Dr. Downs is a Clinical Pharmacy Specialist in Geriatrics, and Dr. Lane is the Associate Chief of Pharmacy, both at the Lexington VA Medical Center in Kentucky.
Correspondence: Dr. Albers ([email protected])

Author disclosures
The authors report no actual or potential conflicts of interest with regard to this article.

Disclaimer
The opinions expressed herein are those of the authors and do not necessarily reflect those of Federal Practitioner, Frontline Medical Communications Inc., the US Government, or any of its agencies.


Ibrutinib and venetoclax combo promising in frontline CLL


CHICAGO—Ibrutinib combined with venetoclax is showing promising clinical activity in the frontline treatment of patients with chronic lymphocytic leukemia (CLL), according to investigators for the CAPTIVATE study.

Among the first 30 treatment-naïve patients evaluated, 77% had undetectable minimal residual disease (MRD; <10⁻⁴ cells) in the blood, and 86% showed a similar response in the bone marrow.
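
The <10⁻⁴ threshold simply means fewer than one detectable CLL cell per 10,000 leukocytes analyzed. A minimal sketch of that arithmetic follows; the function names and example counts are illustrative and are not taken from the CAPTIVATE assay.

```python
# Illustrative sketch of the MRD threshold arithmetic; names and counts are hypothetical,
# not the CAPTIVATE assay's actual analysis code.

def mrd_fraction(leukemic_cells: int, total_leukocytes: int) -> float:
    """Fraction of CLL cells among the leukocytes analyzed."""
    return leukemic_cells / total_leukocytes

def is_undetectable(fraction: float, threshold: float = 1e-4) -> bool:
    """Undetectable MRD: fewer than 1 CLL cell per 10,000 leukocytes (<10^-4)."""
    return fraction < threshold

# Example: 3 CLL cells found among 500,000 leukocytes analyzed -> 6e-6, below 10^-4
print(is_undetectable(mrd_fraction(3, 500_000)))  # True
```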

The overall response rate (ORR) was 100% in 11 evaluable patients. The investigators reported these initial data at the 2018 Annual Meeting of the American Society of Clinical Oncology (abstract 7502).

“These early results show a highly active and safe treatment with 12 cycles of combined treatment with ibrutinib and venetoclax,” said William G. Wierda, MD, PhD, of the MD Anderson Cancer Center in Houston, Texas, who presented the findings at ASCO.

Ibrutinib, a Bruton tyrosine kinase inhibitor, is already approved for the treatment of CLL, and venetoclax, a Bcl-2 inhibitor, is currently used to treat relapsed CLL with del(17p).

Venetoclax in combination with rituximab was recently approved by the US Food and Drug Administration to treat patients with CLL or small lymphocytic lymphoma, with or without del(17p).

With complementary mechanisms of action and preclinical studies suggesting synergy with the combination, CAPTIVATE was designed to test the efficacy of the oral combination given for 12 cycles.

Study design

 CAPTIVATE (NCT02910583) is an ongoing phase 2 study that enrolled 164 patients with treatment-naïve CLL. Patients first received 3 cycles of ibrutinib monotherapy at the standard dose. This was intended to debulk the disease and reduce risk for venetoclax-associated tumor lysis syndrome (TLS).

Venetoclax 400 mg was initiated at cycle 4. After 12 cycles of the combination, patients with confirmed MRD negativity were randomized to receive ibrutinib with a placebo or to continue with the combination therapy.

In this initial report, Dr Wierda highlighted safety data for all 164 enrolled patients and efficacy data for the first 30 patients who had 6 cycles of combination therapy (MRD assessment cohort).

Dr Wierda also reported bone marrow data for the first 14 patients, who received a total of 12 cycles of the combination and represent the safety run-in cohort.

Ibrutinib and venetoclax show promising activity

The median patient age was 58 years; about two-thirds of patients had unmutated IGHV, and one-third had a creatinine clearance of <80 mL/min.

Of the 164 patients, 95% remain on therapy; discontinuations were due to adverse events, and one patient had disease progression to Richter’s transformation.

For the MRD evaluation, all 30 patients had 6 months of combination therapy and continue on treatment.

As expected, lead-in with ibrutinib monotherapy debulked the disease.

Investigators observed a reduction in the proportion of patients at high risk for TLS (24% to 3%) and an increase in the proportion of patients at low risk for TLS (12% to 29%).

A similar picture emerged for debulking of lymph node disease. No patient developed clinical TLS.

Other adverse events were consistent with the safety profile of single-agent ibrutinib and venetoclax. No new safety signals were seen.

After 6 cycles of the combination, blood MRD negativity was reported in 77% of the patients in the MRD assessment cohort.

In the safety run-in cohort of 14 patients, blood MRD negativity was reported in 86% of patients after 12 cycles and in 93% after 15 cycles of the combination. In these patients, bone marrow MRD negativity was achieved in 86%.

After 12 cycles of combination therapy, the objective response rate was 100% among the 11 evaluable patients of the 14 in the safety run-in cohort: 6 patients achieved complete remission (CR) or CR with incomplete blood count recovery (CRi), for a CR/CRi rate of 55%. All had confirmed undetectable MRD.

 

 

Investigators considered these responses promising; assessment of the full treatment plan and of the durability of response is awaited.

The study was sponsored by Pharmacyclics. 


Mircera approved for anemia in pediatric patients with CKD


Mircera®, methoxy polyethylene glycol-epoetin beta, was approved by the US Food and Drug Administration (FDA) to treat anemia in pediatric patients who have chronic kidney disease (CKD).

The drug is indicated for patients ages 5 to 17 years on hemodialysis who are switching from another erythropoiesis-stimulating agent (ESA) after their hemoglobin levels have stabilized.

The FDA also approved the agent to treat adult patients with CKD-associated anemia.

However, the drug is not approved to treat anemia caused by cancer chemotherapy.

The FDA based its approval on data from an open-label, multiple-dose, multicenter, dose-finding trial (NCT00717366).

Investigators enrolled 64 pediatric patients with CKD on hemodialysis. The patients had to have stable hemoglobin levels while receiving another ESA, such as epoetin alfa/beta or darbepoetin alfa.

Patients received Mircera intravenously once every 4 weeks for 20 weeks. Investigators adjusted the dosages, if necessary, after the first administration to maintain target hemoglobin levels.

Efficacy was based on the patients’ ability to maintain target hemoglobin levels and also on data extrapolated from trials of Mircera in adults with CKD.

Patients who received Mircera had a mean change in hemoglobin concentration from baseline of -0.15 g/dL and 75% maintained hemoglobin values within ±1 g/dL of baseline.

Eighty-one percent maintained hemoglobin values within 10–12 g/dL during the evaluation period.
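
As a rough illustration of how these maintenance endpoints can be checked, the sketch below evaluates a hypothetical patient’s hemoglobin series against the ±1 g/dL-of-baseline and 10–12 g/dL criteria; the helper names and values are assumptions for illustration, not the trial’s analysis code.

```python
# Hedged sketch of the maintenance endpoints described above; helper names and the
# example values are hypothetical, not the trial's actual analysis.
from statistics import mean
from typing import Sequence

def within_baseline(baseline: float, values: Sequence[float], tol: float = 1.0) -> bool:
    """True if every evaluation-period value stays within +/- tol g/dL of baseline."""
    return all(abs(v - baseline) <= tol for v in values)

def within_range(values: Sequence[float], low: float = 10.0, high: float = 12.0) -> bool:
    """True if every evaluation-period value stays in the 10-12 g/dL target range."""
    return all(low <= v <= high for v in values)

baseline_hgb = 11.2                      # g/dL at the switch to Mircera
eval_hgb = [11.0, 10.9, 11.3, 11.1]      # g/dL during the evaluation period
print(mean(eval_hgb) - baseline_hgb)     # mean change from baseline (g/dL)
print(within_baseline(baseline_hgb, eval_hgb))   # True
print(within_range(eval_hgb))                    # True
```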

The safety findings in pediatric patients were consistent with those previously reported in adults.

The most common adverse reactions, occurring in 10% or more of patients as indicated in the prescribing information, are hypertension, diarrhea, and nasopharyngitis.

The drug carries a black box warning for increased risk of death, myocardial infarction, stroke, venous thromboembolism, thrombosis of vascular access, and tumor progression or recurrence.

Mircera is an erythropoietin receptor activator with greater in vivo activity and a longer half-life than erythropoietin.

Mircera is manufactured by Vifor (International) Inc. 


A New Protocol for RhD-negative Pregnant Women?


Practice Changer

A 30-year-old G1P0 woman presents to your office for routine obstetric care at 18 weeks’ gestation. Her pregnancy has been uncomplicated, but her prenatal lab evaluation is notable for blood type A-negative. She wants to know if she really needs the anti-D immune globulin injection.

Rhesus (Rh)D-negative women carrying an RhD-positive fetus are at risk of developing anti-D antibodies, placing the fetus at risk for hemolytic disease of the fetus and newborn (HDFN). If undiagnosed and/or untreated, HDFN carries significant risk for perinatal morbidity and mortality.2

With routine postnatal anti-D immunoglobulin prophylaxis of RhD-negative women who delivered an RhD-positive child (which began around 1970), the risk for maternal alloimmunization was reduced from 16% to 1.12%-1.3%.3-5 The risk was further reduced to approximately 0.28% with the addition of consistent prophylaxis at 28 weeks’ gestation.4 As a result, the current standard of care is to administer anti-D immunoglobulin at 28 weeks’ gestation, within 72 hours of delivery of an RhD-positive fetus, and after events with risk for fetal-to-maternal transfusion (eg, spontaneous, threatened, or induced abortion; invasive prenatal diagnostic procedures such as amniocentesis; blunt abdominal trauma; external cephalic version; second or third trimester antepartum bleeding).6

The problem of unnecessary Tx. However, under this current practice, many RhD-negative women are receiving anti-D immunoglobulin unnecessarily. This is because the fetus’s RhD status is not routinely known during the prenatal period.

Enter cell-free DNA testing. Cell-free DNA testing analyzes fragments of fetal DNA found in maternal blood. The use of cell-free DNA testing at 10 to 13 weeks’ gestation to screen for fetal chromosomal abnormalities is reliable (91%-99% sensitivity for trisomies 21, 18, and 13)7 and becoming increasingly common.

A notable meta-analysis. A 2017 meta-analysis of 30 studies of cell-free DNA testing of RhD status in the first and second trimesters calculated a sensitivity of 99.3% and a specificity of 98.4%.7 Denmark, the Netherlands, Sweden, France, and Finland are using this method routinely. As of this writing, the American College of Obstetricians and Gynecologists (ACOG) has not recommended the use of cell-free DNA RhD testing in the United States, but they do note that as the cost of the assay declines, this method may become preferred.8 The National Institute for Health and Care Excellence in England recommends its use as long as its cost remains below a set threshold.9

This study evaluated the accuracy of using cell-free DNA testing at 27 weeks’ gestation to determine fetal RhD status compared with serologic typing of cord blood at delivery.


STUDY SUMMARY

Test gets high marks in Netherlands trial

This large observational cohort trial from the Netherlands examined the accuracy of identifying RhD-positive fetuses using cell-free DNA isolates in maternal plasma. Over the 15-month study period, fetal RhD testing was conducted during Week 27 of gestation, and results were compared with those obtained using neonatal cord blood at birth. If the fetal RhD test was positive, providers administered 200 µg anti-D immunoglobulin during the 30th week of gestation and within 48 hours of birth. If fetal RhD was negative, providers were told immunoglobulin was unnecessary.
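
The targeted-prophylaxis rule described above reduces to a single branch on the week-27 screening result. The sketch below encodes that rule as reported; it is an illustration only, not a clinical decision tool, and the function name is hypothetical.

```python
# Minimal sketch of the targeted-prophylaxis rule described above (illustrative only):
# anti-D immunoglobulin is given only when the week-27 cell-free DNA screen calls the
# fetus RhD positive.
from typing import List

def anti_d_plan(cfdna_rhd_positive: bool) -> List[str]:
    """Prophylaxis steps implied by the fetal RhD screening result."""
    if cfdna_rhd_positive:
        return [
            "200 µg anti-D immunoglobulin during gestational week 30",
            "200 µg anti-D immunoglobulin within 48 hours of birth",
        ]
    return ["no anti-D immunoglobulin indicated (fetus predicted RhD negative)"]

print(anti_d_plan(True))
print(anti_d_plan(False))
```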

More than 32,000 RhD-negative women were screened. The cell-free DNA test showed fetal RhD-positive results 62% of the time and RhD-negative results in the remainder. Cord blood samples were available for 25,789 pregnancies (80%).

Sensitivity, specificity. The sensitivity for identifying fetal RhD was 99% and the specificity was 98%. Both negative and positive predictive values were 99%. Overall, there were 225 false-positive results and nine false-negative results. Of the nine false negatives, six were due to a lack of fetal DNA in the sample and three to technical error (defined as an operator ignoring a failure of the robot pipetting the plasma or other technical failures).

The false-negative rate (0.03%) was lower than the predetermined estimated false-negative rate of cord blood serology (0.25%). In 22 of the supposed false positives, follow-up serology or molecular testing found an RhD gene was actually present, meaning the results of the neonatal cord blood serology in these cases were falsely negative. If you recalculate with these data in mind, the false-negative rate for fetal DNA testing was actually less than half that of typical serologic determination.
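
To make the accuracy figures concrete, the sketch below recomputes them from an approximate 2×2 table. The false-positive (225) and false-negative (9) counts are reported above; the true-positive and true-negative counts are back-calculated estimates from the roughly 25,789 cord-blood-confirmed pregnancies and 62% RhD positivity, so they may differ slightly from the study’s exact figures.

```python
# Approximate 2x2 reconstruction: FP and FN are the reported counts, TP and TN are
# back-calculated estimates from the cohort size and ~62% RhD positivity.
tp, fn = 15_980, 9     # RhD-positive fetuses: detected vs missed (TP is an estimate)
tn, fp = 9_575, 225    # RhD-negative fetuses: correctly negative vs false positive (TN is an estimate)
total = tp + fn + tn + fp

sensitivity = tp / (tp + fn)      # ~0.999 -> reported as 99%
specificity = tn / (tn + fp)      # ~0.977 -> reported as 98%
ppv = tp / (tp + fp)              # ~0.986 -> reported as 99%
npv = tn / (tn + fn)              # ~0.999 -> reported as 99%
fn_rate = fn / total              # ~0.0003, roughly the 0.03% quoted above

print(f"sensitivity={sensitivity:.3f}, specificity={specificity:.3f}, "
      f"PPV={ppv:.3f}, NPV={npv:.3f}, FN rate={fn_rate:.4f}")
```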


WHAT’S NEW

Accurate test, potential to reduce unnecessary Tx

Fetal RhD testing at 27 weeks’ gestation appears to be highly accurate and could reduce the unnecessary use of anti-D immunoglobulin when the fetal RhD is negative.

CAVEATS

Different results by ethnicity?

Dutch participants are not necessarily reflective of the US population. Known variation in the rate of fetal RhD positivity among RhD-negative pregnant women by race and ethnicity could mean that the number of women able to forego anti-D immunoglobulin prophylaxis would be different in the United States than in other countries.

Also, in this study, polymerase chain reaction for two RhD sequences was run in triplicate, and a computer-based algorithm was used to automatically score samples to provide results. For safe implementation, the cell-free fetal RhD DNA testing process would need to follow similar methods.

 

CHALLENGES TO IMPLEMENTATION

Cost and availability are big unknowns

Cost and availability of the test may be barriers, but there is currently too little information on either subject in the United States to make a determination. A 2013 study put the cost of cell-free DNA testing to determine fetal RhD status at approximately $682 at that time.10

ACKNOWLEDGEMENT

The PURLs Surveillance System was supported in part by Grant Number UL1RR024999 from the National Center for Research Resources, a Clinical Translational Science Award to the University of Chicago. The content is solely the responsibility of the authors and does not necessarily represent the official views of the National Center for Research Resources or the National Institutes of Health.

Copyright © 2018. The Family Physicians Inquiries Network. All rights reserved.

Reprinted with permission from the Family Physicians Inquiries Network and The Journal of Family Practice (2018;67[5]: 306, 308, 319).

References

1. de Haas M, Thurik FF, van der Ploeg CP, et al. Sensitivity of fetal RHD screening for safe guidance of targeted anti-D immunoglobulin prophylaxis: prospective cohort study of a nationwide programme in the Netherlands. BMJ. 2016;355:i5789.
2. American College of Obstetricians and Gynecologists. ACOG Practice Bulletin No. 75: Management of alloimmunization during pregnancy. Obstet Gynecol. 2006;108:457-464.
3. Urbaniak SJ, Greiss MA. RhD haemolytic disease of the fetus and the newborn. Blood Rev. 2000;14(1):44-61.
4. Mayne S, Parker JH, Harden TA, et al. Rate of RhD sensitisation before and after implementation of a community based antenatal prophylaxis programme. BMJ. 1997;315(7122):1588.
5. MacKenzie IZ, Bowell P, Gregory H, et al. Routine antenatal Rhesus D immunoglobulin prophylaxis: the results of a prospective 10 year study. Br J Obstet Gynaecol. 1999;106:492-497.
6. Zolotor AJ, Carlough MC. Update on prenatal care. Am Fam Physician. 2014;89(3):199-208.
7. Mackie FL, Hemming K, Allen S, et al. The accuracy of cell-free fetal DNA-based non-invasive prenatal testing in singleton pregnancies: a systematic review and bivariate meta-analysis. BJOG. 2017;124(1):32-46.
8. American College of Obstetricians and Gynecologists Committee on Practice Bulletins-Obstetrics. Practice Bulletin No. 181: Prevention of Rh D Alloimmunization. Obstet Gynecol. 2017;130:e57-e70.
9. National Institute for Health and Care Excellence. High-throughput non-invasive prenatal testing for fetal RHD genotype 1: Recommendations. www.nice.org.uk/guidance/dg25/chapter/1-Recommendations. Accessed May 7, 2018.
10. Hawk AF, Chang EY, Shields SM, Simpson KN. Costs and clinical outcomes of noninvasive fetal RhD typing for targeted prophylaxis. Obstet Gynecol. 2013;122(3):579-585.

Corey Lyon and Aimee English are with the University of Colorado Family Medicine Residency Program in Denver.

USPSTF: Don’t add ECG for cardio risk assessment


 

Adding electrocardiography screening to standard cardiovascular disease assessment is not necessary for asymptomatic, low-risk adults, according to final recommendations from the U.S. Preventive Services Task Force.

In the statement published June 12 in JAMA, the USPSTF gave a D recommendation against using ECG screening to evaluate cardiovascular disease risk in asymptomatic, low-risk individuals and issued a statement that current evidence is inadequate (I statement) to evaluate the harms versus benefits of additional ECG for asymptomatic individuals who may be at medium to high risk for future cardiovascular events.

The recommendation against screening ECG applies to adults with no CVD symptoms or CVD diagnosis, wrote lead author Susan J. Curry, PhD, of the University of Iowa, Iowa City, and her colleagues.

The Task Force concluded that the potential harms of screening ECG outweigh or equal the potential benefits in the asymptomatic, low-risk population. However, it noted clinical considerations for screening in moderate- to high-risk individuals, including the potential for more intensive medical management in those identified as higher risk after an ECG, balanced by the potential for harm from medication side effects or follow-up procedures.

Treatment for asymptomatic adults at increased risk for CVD may include lipid-lowering medications, tobacco cessation, and lifestyle modifications regarding diet and exercise, according to the Task Force, and guidelines already exist for many of these factors.

ECG screening could reclassify individuals as higher or lower risk, which could potentially improve health outcomes, wrote Daniel E. Jonas, MD, of the University of North Carolina, Chapel Hill, and his colleagues in the evidence report accompanying the recommendations. The researchers reviewed data from 16 studies including 77,140 individuals. However, the strength of evidence was low for the value of ECG to reclassify individuals, and no improvements in health outcomes were noted, even in high-risk populations such as diabetes patients, the researchers said.

In particular, no significant improvement from additional exercise ECG occurred in a pair of randomized controlled trials including 1,151 individuals, they noted.

The final recommendation reflects the 2017 draft statement and the 2012 final recommendation statement. The full recommendation statement is available online in JAMA and on the Task Force website.

The research was funded by the Agency for Healthcare Research and Quality under a grant from the U.S. Department of Health and Human Services. The researchers had no financial conflicts to disclose.

SOURCES: Jonas D et al. JAMA. 2018 Jun 12;319(22):2315-28; Curry S et al. JAMA. 2018 Jun 12;319(22):2308-14.

Recommendations focus on asymptomatic adults

 

The conclusions reached by the USPSTF were warranted, based on the latest research, but may be modified by future information as the science evolves.

In contrast to the 2004 and 2012 task force statements, which were focused on coronary heart disease events, the current analysis used a measure of cardiovascular events, defined as the composite of coronary heart disease, cerebrovascular disease, and peripheral artery disease. Given that ECG parameters usually reflect the presence of coronary heart disease, their value as a predictor of cardiovascular disease in asymptomatic adults may be limited.

The evidence reviewed by the USPSTF shows that ECG screening of low-risk individuals is unlikely to prevent CVD; however, the assessment of risk remains a challenge and puts the decision on physicians based on individual risk factors. “It would be an overstatement of current knowledge to conclude that patients at the higher end of the intermediate to high-risk classification would benefit from routine ECG testing with repeated measures over time,” he said.

However, risk factors aside, one special population to be considered for ECG screening is competitive athletes. Screening athletes is common in many countries, though somewhat controversial in the United States, despite its increasing use by professional and college sports teams. More research is needed on the value of resting and exercise ECG as markers of CVD risk, and new data may lead researchers to reassess the value of ECG procedures and use them for improved risk classification.

Robert J. Myerburg, MD, an electrophysiologist at the University of Miami, made these comments in an editorial accompanying the article (JAMA. 2018 June 12;319[22]:2277-9). He had no financial conflicts to disclose.

Publications
Topics
Sections
Body

 

Recommendations focus on asymptomatic adults

 

Adding electrocardiography screening to standard cardiovascular disease assessment is not necessary for asymptomatic, low-risk adults, according to final recommendations from the U.S. Preventive Services Task Force.

In the statement, published June 12 in JAMA, the USPSTF gave a D recommendation against using ECG screening to evaluate cardiovascular disease risk in asymptomatic, low-risk individuals, and issued an I statement indicating that current evidence is inadequate to weigh the harms against the benefits of additional ECG for asymptomatic individuals who may be at intermediate to high risk for future cardiovascular events.

hepatus/iStockphoto
The recommendation against screening ECG applies to adults with no CVD symptoms or CVD diagnosis, wrote lead author Susan J. Curry, PhD, of the University of Iowa, Iowa City, and her colleagues.

The Task Force concluded that the potential harms of screening ECG outweigh or equal the potential benefits in the asymptomatic, low-risk population. However, they noted clinical considerations for screening in intermediate- to high-risk individuals, including the potential for more intensive medical management in those identified as higher risk after an ECG, balanced against the potential for harm from medication side effects or follow-up procedures.

Treatment for asymptomatic adults at increased risk for CVD may include lipid-lowering medications, tobacco cessation, and lifestyle modifications regarding diet and exercise, according to the Task Force, and guidelines already exist for many of these factors.

ECG screening could reclassify individuals as higher or lower risk, which could potentially improve health outcomes, wrote Daniel E. Jonas, MD, of the University of North Carolina, Chapel Hill, and his colleagues in the evidence report accompanying the recommendations. The researchers reviewed data from 16 studies including 77,140 individuals. However, the strength of evidence was low for the value of ECG to reclassify individuals, and no improvements in health outcomes were noted, even in high-risk populations such as diabetes patients, the researchers said.

In particular, no significant improvement from additional exercise ECG occurred in a pair of randomized controlled trials including 1,151 individuals, they noted.

The final recommendation reflects the 2017 draft statement and the 2012 final recommendation statement. The full recommendation statement is available online in JAMA and on the Task Force website.

The research was funded by the Agency for Healthcare Research and Quality under a grant from the U.S. Department of Health and Human Services. The researchers had no financial conflicts to disclose.

SOURCES: Jonas D et al. JAMA. 2018 Jun 12;319(22):2315-28; Curry S et al. JAMA. 2018 Jun 12;319(22):2308-14.

 



FROM JAMA

Vitals

 

Key clinical point: The USPSTF recommends against additional ECG to assess CVD risk in asymptomatic, low-risk adults.

Major finding: Two randomized controlled trials including 1,151 individuals found no significant improvement from additional exercise ECG.

Study details: Researchers reviewed data from 16 studies including 77,140 individuals.

Disclosures: The research was funded by the Agency for Healthcare Research and Quality under a grant from the U.S. Department of Health & Human Services. The researchers had no financial conflicts to disclose.

Sources: Jonas D et al. JAMA. 2018;319[22]:2315-28; Curry S et al. JAMA. 2018;319[22]:2308-14.


What underlies post–bariatric surgery bone fragility?


 

Charting a healthy path for patients after bariatric surgery can be complicated, and addressing bone health is an important part of the endocrinologist’s role in keeping patients safe from postsurgical fractures, according to John Bilezikian, MD.

“Abnormal bone metabolism is a feature of both obesity and gastric bypass surgery,” said Dr. Bilezikian, speaking during a bariatric surgery–focused session at the annual scientific & clinical congress of the American Association of Clinical Endocrinologists.

It’s not easy to assess bone health, even before surgery, said Dr. Bilezikian. Even objective measures of bone density, such as dual-energy x-ray absorptiometry (DXA), may be skewed: very high fat mass causes artifact that interferes with accurate measurement of bone density, and DXA can’t distinguish between cortical and trabecular bone. The latter is a particular issue in high body mass index patients, since obesity is known to be associated with a more fragile bone microarchitecture, said Dr. Bilezikian, the Dorothy L. and Daniel H. Silberberg Professor of Medicine and director of the metabolic bone diseases unit at Columbia University, New York.

With these caveats in mind, Dr. Bilezikian said, there are some lessons to be learned from existing research to better manage bone health in bariatric patients.

After Roux-en-Y gastric bypass surgery (RYGB), bone turnover soon increases, with bone resorption markers increasing by up to 200% in the first 12-18 months after surgery. Bone formation markers also are elevated but to a lesser extent, said Dr. Bilezikian. Over time, the weight loss from RYGB is associated with a significant drop in bone mineral density (BMD) at weight-bearing sites. Weight loss was associated with bone loss at the total hip (r = 0.70; P less than .0003) and femoral neck (r = 0.47; P = .03) (J Clin Endocrinol Metab. 2013 Feb;98[2]:541-9).

A newer technology, high-resolution peripheral quantitative CT (HR-pQCT), offers a noninvasive look not just at bone size and density but also at microarchitecture, including cortical thickness and details of trabecular structure. This technology “can help elucidate the structural basis for fragility,” said Dr. Bilezikian.

HR-pQCT was used in a recent study (J Bone Miner Res. 2017 Dec 27. doi: 10.1002/jbmr.3371) that followed 48 patients for 1 year after RYGB. Using HR-pQCT, DXA, and serum markers of bone turnover, the researchers found a significant decrease in BMD and an estimated decrease in bone strength after RYGB. Bone cortex became increasingly porous as well. Taken together, these changes may indicate an increased fracture risk, concluded the investigators.

A longer study that followed RYGB recipients for 2 years and used similar imaging and serum parameters also found that participants had decreased BMD. Tellingly, these investigators saw a more marked increase in cortical porosity in the second year after bypass. Estimated bone strength continued to decline during the study period, even after weight loss had stopped.

All of these findings, said Dr. Bilezikian, point to a pathogenetic process other than weight loss that promotes the deteriorating bone microarchitecture seen years after RYGB. “Loss of bone mass and skeletal deterioration after gastric bypass surgery cannot be explained by weight loss alone,” said Dr. Bilezikian.

Another recent study was able to follow a small cohort of patients for a full 5 years, using DXA, lumbar CT, and HR-pQCT. Though weight loss stabilized after 2 years and 25-OH D and calcium levels were unchanged from presurgical baseline, bone density continued to drop, and bone microarchitecture further deteriorated, said Dr. Bilezikian (Greenblatt L et al. ASBMR 2017, Abstract 1125).

Initially, post–bariatric surgery weight loss may induce bone changes because of skeletal unloading; further down the road, estrogen production by adipose tissue is decreased with ongoing fat loss, and sarcopenia may have an adverse effect on bone microarchitecture. Postsurgical malabsorption may also be an early mechanism of bone loss.

Other hormonal changes can include secondary hyperparathyroidism. Leptin, adiponectin, and peptide YY levels also may be altered.

Do these changes in BMD and bone architecture result in increased fracture risk? This question is difficult to answer, for the same reasons that other bariatric surgery research can be challenging, said Dr. Bilezikian. There is heterogeneity of procedures and supplement regimens, sample sizes can be small, follow-up times short, and adherence often is not tracked.

However, there are some clues that RYGB may be associated with an increased risk of all fractures and of fragility fractures, with appendicular fractures seen most frequently (Osteoporos Int. 2014 Jan; 25[1]:151-8). A larger study that tracked 12,676 patients receiving bariatric surgery, 38,028 patients with obesity, and 126,760 nonobese participants found that the bariatric patients had a 4.1% risk of fracture at 4 years post surgery, compared with 2.7% and 2.4% fracture rates in the participants with and without obesity, respectively (BMJ. 2016;354:i3794).
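For a rough sense of scale (a back-of-the-envelope comparison of the percentages above that ignores differences in follow-up, age, and any adjustment the investigators applied), the 4.1% fracture risk in the bariatric surgery group works out to about 0.041/0.027 ≈ 1.5 times that of the participants with obesity and about 0.041/0.024 ≈ 1.7 times that of the nonobese participants.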

Other retrospective studies have found “a time-dependent increase in nonvertebral fractures with Roux-en-Y gastric bypass compared to gastric banding,” said Dr. Bilezikian.

How can these risks be managed after gastric bypass surgery? “Strive for nutritional adequacy” as the first step, said Dr. Bilezikian, meaning that calcium and vitamin D should be prescribed – and adherence encouraged – as indicated. Levels of 25-OH D should be checked regularly, with supplementation managed to keep levels over 30 ng/mL, he said.

All patients should be encouraged to develop and maintain an appropriate exercise regimen, and BMD should be followed over time. Those caring for post–gastric bypass patients can still use a bisphosphonate or other bone-health medication, if indicated by standard parameters. However, “You probably shouldn’t use an oral bisphosphonate in this population,” said Dr. Bilezikian.

Dr. Bilezikian reported that he has consulting or advisory relationships with Amgen, Radius Pharmaceuticals, Shire Pharmaceuticals, and Ultragenyx, and serves on a data safety monitoring board for Regeneron.



REPORTING FROM AACE 2018


Hemostatic clipping cuts bleeds after large polyp removal


Using hemostatic clips to close colonic mucosal defects following endoscopic removal of larger polyps cut the rate of delayed, severe bleeding episodes in half in a multicenter, randomized trial with 918 patients.

“The benefit appears limited to proximal polyps,” Heiko Pohl, MD, said at the annual Digestive Disease Week®. In that prespecified subgroup, which included two-thirds of enrolled patients, placement of hemostatic clips on defects left after removing polyps 20 mm in diameter or larger cut the rate of delayed, severe bleeding by two-thirds, compared with patients with large defects not treated with clips. This result represented a number needed to treat with clips of 15 patients with large proximal polyps to prevent one episode of delayed severe bleeding, said Dr. Pohl, a gastroenterologist at the VA Medical Center in White River Junction, Vt.

Mitchel L. Zoler/MDedge News
Dr. Heiko Pohl

Although the results that Dr. Pohl reported came from a trial that originally had been designed to generate data for Food and Drug Administration approval for using the clips to close defects following large polyp removal, the clips received approval for this indication from the agency in 2016 while the study was still in progress.

But Dr. Pohl maintained that the new evidence for efficacy that he reported will provide further impetus for gastroenterologists to use clips when they remove larger polyps in proximal locations. “I think this study will help standardize treatment of mucosal resections and change clip use,” he said in an interview.

“This was a terrific study, and one that needed to be done,” commented John R. Saltzman, MD, professor of medicine at Harvard Medical School and director of endoscopy at Brigham and Women’s Hospital in Boston. But Dr. Saltzman, who spoke from the floor during discussion of Dr. Pohl’s report, added that data on the average number of clips required to close defects were needed to assess the cost-effectiveness of the treatment, data that Dr. Pohl said were available but still being analyzed.

“We have to know how many clips to use and how to close the polyp,” Dr. Saltzman said. Dr. Pohl estimated that roughly four or five clips had been used per defect, but he cautioned that this estimate was preliminary pending his complete analysis of the data.

The CLIP (Clip Closure After Endoscopic Resection of Large Polyps) study enrolled patients with at least one nonpedunculated colonic polyp that was at least 20 mm in diameter at 16 U.S. centers, as well as one center in Montreal and one in Barcelona. The patients averaged 65 years of age, and 6%-7% of patients had more than one large polyp removed during their procedure. Randomization produced one important imbalance in assignment: 25% of the 454 patients in the clipped arm were on an antithrombotic drug (either an anticoagulant or antiplatelet drug) at the time of their endoscopy, compared with 33% of the 464 patients in the control arm.

The study’s primary endpoint was the incidence of “severe” bleeding within 30 days after the procedure. The study defined severe bleeding as an event that required hospitalization, need for repeat endoscopy, need for a blood transfusion, or need for any other major intervention, explained Dr. Pohl, who is also on the staff of Dartmouth-Hitchcock Medical Center in Lebanon, N.H.

Such events occurred in 3.5% of the patients who underwent clipping and in 7.3% of control patients who received no clipping, a statistically significant difference (P = .01). Among patients with proximal polyps, the bleeding rates were 3.3% among clipped patients and 9.9% among controls, also a statistically significant difference. Among patients with distal polyps the bleeding rates were 4.0% among clipped patients and 1.4% among controls, a difference that was not statistically significant.
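As a quick check on the number needed to treat of 15 cited above, the absolute risk reduction among patients with proximal polyps is 9.9% − 3.3% = 6.6%, and 1/0.066 ≈ 15, so the reported figures are internally consistent; any small discrepancy from the trial's own calculation would reflect rounding of the reported percentages.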

Dr. Pohl and his associates ran three other prespecified, secondary analyses that divided the enrolled patients into subgroups. These analyses showed no significant effect on outcome of polyp size (20-39 mm vs. 40 mm or larger), treatment with an antithrombotic drug, or method of cauterization. The median time to severe bleeding was 1 day among the controls and 7 days among the clipped patients.

Aside from the difference in rates of delayed bleeding, the two study arms showed no significant differences in the incidence of any other serious postprocedure events. The rates of these nonbleeding events were 1.3% among clipped patients and 2.4% among the controls.

The researchers ran all these analyses based on the intention-to-treat assignment of patients. However, during the study, 9% of patients assigned to the control arm crossed over and received clips during their procedure after all, a rate that Dr. Pohl called “surprisingly high,” whereas 14% of patients assigned to the clip arm never received clips. A per-protocol analysis that censored patients who did not receive their assigned treatment showed that, among the remaining patients who underwent their assigned treatment, the rate of delayed, severe bleeds was 2.3% among the 390 patients actually treated with clips and 7.2% among the 419 controls who never received clips, a statistically significant difference, he reported.

Dr. Pohl also noted that it was “somewhat surprising” that clipping appeared to result in complete closure in “only” 68% of patients who underwent clipping and that it produced partial closure in an additional 20% of patients, with the remaining patients having mucosal defects that were not considered closed by clipping.

The study was funded by Boston Scientific, the company that markets the hemostatic clip (Resolution 360) tested in the study. Dr. Pohl had no additional disclosures. Dr. Saltzman had no disclosures.


SOURCE: Pohl H et al. Digestive Disease Week, Presentation 886.




REPORTING FROM DDW 2018

Vitals

 

Key clinical point: Hemostatic wound clipping after large polyp removal cuts delayed bleeding, especially for proximal polyps.

Major finding: The incidence of severe, delayed bleeds was 3.5% among clipped patients and 7.3% among controls.

Study details: The CLIP study, a multicenter, randomized trial with 918 patients.

Disclosures: The study was funded by Boston Scientific, the company that markets the hemostatic clip (Resolution 360) tested in the study. Dr. Pohl had no additional disclosures. Dr. Saltzman had no disclosures.

Source: Pohl H et al. Digestive Disease Week, Presentation 886.


A new way to classify endometrial cancer


 

We classify endometrial cancer so that we can communicate and define each patient’s disease status, the potential for harm, and the likelihood that adjuvant therapies might provide help. Traditional forms of classification have clearly fallen short in achieving this aim, as we all know of patients with apparent low-risk disease (such as stage IA grade 1 endometrioid carcinoma) who have had recurrences and died from their disease, and we know that many patients have been subjected to overtreatment for their cancer and have acquired lifelong toxicities of therapy. This column will explore the newer, more sophisticated molecular-based classifications that are being validated for endometrial cancer, and the ways in which this promises to personalize the treatment of endometrial cancer.

Dr. Emma C. Rossi
We historically considered endometrial cancer with respect to “types”: type 1 cancer being estrogen dependent, featuring PTEN mutations, and affecting more obese patients; type 2 cancer being associated with p53 mutations, not estrogen dependent, and affecting older, less obese individuals.1 These categories were reasonable guides but ultimately oversimplified the disease and its affected patients. Additionally we have used histologic types, International Federation of Gynecology and Obstetrics grading, and surgical staging to categorize tumors. Unfortunately, histologic cell type and grade are limited by poor agreement among pathologists, with up to 50% discordance between readers, and surgical staging information may be limited in its completeness.2 Therefore, these categorizations lack the precision and accuracy to serve as prognosticators or to direct therapy. Reliance upon these inaccurate and imprecise methods of characterization may be part of the reason why most major clinical trials have failed to identify survival benefits for experimental therapies in early-stage disease. We may have been indiscriminately applying therapies instead of targeting the patients who are the most likely to derive benefit.

Breast cancer and melanoma are examples of how incorporating molecular data, such as hormone receptor status, HER2/neu status, or BRAF positivity, has advanced the personalization of therapy. We are now moving toward this for endometrial cancer.
 

What is the Cancer Genome Atlas?

In 2006, the National Institutes of Health announced an initiative coordinating work between the National Cancer Institute and the National Human Genome Research Institute to analyze the human genome for key genomic alterations found in 33 common cancers. These data were combined with clinical information (such as survival) to classify the behavior of those cancers with respect to their genomic alterations and to look for patterns linking mutations and behavior. The goal of this analysis was to shift the paradigm of cancer classification from being centered on primary organ site toward tumors’ shared genomic patterns.

In 2013 the Cancer Genome Atlas published their results of complete gene sequencing in endometrial cancer.3 The authors identified four discrete subgroups of endometrial cancer with distinct molecular mutational profiles and distinct clinical outcomes: polymerase epsilon (POLE, pronounced “pole-ee”) ultramutated, microsatellite instability (MSI) high, copy number high, and copy number low.
 

POLE ultramutated

An important subgroup identified in the Cancer Genome Atlas was a group of patients with a POLE ultramutated state. POLE encodes a subunit of DNA polymerase epsilon, the enzyme responsible for replicating the leading DNA strand. Nonfunctioning POLE results in proofreading errors and a subsequent ultramutated cellular state with a predominance of single nucleotide variants. POLE proofreading domain mutations in endometrial cancer and colon cancer are associated with excellent prognosis, likely secondary to the immune response that is elicited by this ultramutated state from creation of “antigenic neoepitopes” that stimulate T-cell response. Effectively, the very mutated cell is seen as “more foreign” to the body’s immune system.

Approximately 10% of patients with endometrial cancer have a POLE ultramutated state, and, as stated above, prognosis is excellent, even if coexisting with a histologic cell type (such as serous) that is normally associated with adverse outcomes. These women tend to be younger, with a lower body mass index, higher-grade endometrioid cell type, the presence of lymphovascular space invasion, and low stage.
 

MSI high

MSI (microsatellite instability) results from epigenetic silencing (such as hypermethylation) of, or loss of expression of, mismatch repair genes (such as MLH1, MSH2, MSH6, PMS2). These genes code for proteins critical in the repair of mismatches in short repeated sequences of DNA. Loss of their function results in an accumulation of errors in these sequences: MSI. It is a feature of the inherited Lynch syndrome, but is also found sporadically in endometrial tumors. These tumors accumulate a number of mutations during cell replication that, as in POLE hypermutated tumors, are associated with eliciting an immune response.

 

 

These tumors tend to be associated with a higher-grade endometrioid cell type, the presence of lymphovascular space invasion, and an advanced stage. Patients with tumors that have been described as MSI high are candidates for “immune therapy” with the PD-1 inhibitor pembrolizumab because of their proinflammatory state and observed favorable responses in clinical trials.4
 

Copy number high/low

Copy number (CN) high and low refer to the results of microarrays in which hierarchical clustering was applied to identify recurring regions of amplification or deletion. The CN-high group was associated with the poorest outcomes (recurrence and survival). There is significant overlap with mutations in TP53. Most serous carcinomas were CN high; however, 25% of patients with high-grade endometrioid cell type shared the CN-high classification. These tumors shared great molecular similarity to high-grade serous ovarian cancers and basal-like breast cancer.

Those patients who did not possess mutations that classified them as POLE hypermutated, MSI high, or CN high were classified as CN low. This group included predominantly grades 1 and 2 endometrioid adenocarcinomas of an early stage and had a favorable prognostic profile, though less favorable than those with a POLE ultramutated state, which appears to be somewhat protective.
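To make the grouping concrete, the following is a minimal, illustrative sketch of how a tumor might be bucketed into one of the four subgroups described above, checking POLE status first, then MSI, then copy number, with CN low as the residual category. This is not the Cancer Genome Atlas pipeline (which relied on integrated sequencing and clustering), and it is not a validated clinical classifier; the precedence order, input fields, and function name are assumptions for illustration only.

from dataclasses import dataclass

@dataclass
class TumorProfile:
    # Hypothetical, simplified molecular inputs for one endometrial tumor
    pole_proofreading_mutation: bool  # pathogenic POLE exonuclease (proofreading) domain mutation
    msi_high: bool                    # microsatellite instability high / mismatch repair deficient
    copy_number_high: bool            # extensive copy number alterations (often TP53 mutant, serouslike)

def classify_subgroup(tumor: TumorProfile) -> str:
    # Assumed precedence: POLE ultramutated > MSI high > CN high; everything else is CN low.
    # Tumors with more than one feature would require molecular adjudication in practice.
    if tumor.pole_proofreading_mutation:
        return "POLE ultramutated"    # excellent prognosis
    if tumor.msi_high:
        return "MSI high"             # proinflammatory; candidate for immunotherapy
    if tumor.copy_number_high:
        return "Copy number high"     # serouslike, poorest outcomes
    return "Copy number low"          # mostly early-stage grade 1-2 endometrioid, favorable prognosis

# Example: an MSI-high, POLE wild-type endometrioid tumor
print(classify_subgroup(TumorProfile(False, True, False)))  # prints "MSI high"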
 

Molecular/metabolic interactions

While molecular alterations are clearly important drivers of a cancer cell’s behavior, other clinical and metabolic factors influence cancer behavior as well. For example, body mass index, adiposity, and glucose and lipid metabolism have been shown to be important drivers of cellular behavior and of responsiveness to targeted therapies.5,6 Additionally, age, race, and other metabolic states contribute to oncologic behavior. Future classifications of endometrial cancer are unlikely to use molecular profiles in isolation but will need to incorporate these additional patient-specific data to better predict and prognosticate outcomes.

Clinical applications

If researchers can better define and describe a patient’s endometrial cancer from the time of biopsy, important clinical decisions could be better informed. For example, in a premenopausal patient with an endometrial cancer who is considering fertility-sparing treatments, preoperative knowledge of a POLE ultramutated state (and therefore an anticipated good prognosis) might favor fertility preservation or avoid comprehensive staging, which may be of limited value. Similarly, if an MSI-high profile is identified, leading to a Lynch syndrome diagnosis, she may be more inclined to undergo a hysterectomy with bilateral salpingo-oophorectomy and staging, as she is at known increased risk for a more advanced endometrial cancer, as well as the potential for ovarian cancer.

Postoperative incorporation of molecular data promises to be particularly helpful in guiding adjuvant therapies and sparing some women from unnecessary treatments. For example, women with high-grade endometrioid tumors who are CN high were historically treated with radiotherapy but might do better treated with systemic adjuvant therapies traditionally reserved for nonendometrioid carcinomas. Costly therapies such as immunotherapy can be directed toward those with MSI-high tumors, and the rare patient with a POLE ultramutated state who has a recurrence or advanced disease. Clinical trials will be able to cluster enrollment of patients with CN-high, serouslike cancers with those with serous cancers, rather than combining them with patients whose cancers predictably behave much differently.

Much work is still needed to validate this molecular profiling in endometrial cancer and define the algorithms associated with treatment decisions; however, it is likely that the way we describe endometrial cancer in the near future will be quite different.
 

Dr. Rossi is an assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill. She has no disclosures.

References

1. Bokhman JV. Two pathogenetic types of endometrial carcinoma. Gynecol Oncol. 1983;15(1):10-7.

2. Clarke BA et al. Endometrial carcinoma: controversies in histopathological assessment of grade and tumour cell type. J Clin Pathol. 2010;63(5):410-5.

3. Cancer Genome Atlas Research Network. Integrated genomic characterization of endometrial carcinoma. Nature. 2013;497(7447):67-73.

4. Ott PA et al. Pembrolizumab in advanced endometrial cancer: Preliminary results from the phase Ib KEYNOTE-028 study. J Clin Oncol. 2016;34(suppl):Abstract 5581.

5. Roque DR et al. Association between differential gene expression and body mass index among endometrial cancers from the Cancer Genome Atlas Project. Gynecol Oncol. 2016;142(2):317-22.

6. Talhouk A et al. New classification of endometrial cancers: The development and potential applications of genomic-based classification in research and clinical care. Gynecol Oncol Res Pract. 2016 Dec;3:14.


 


Copy number (CN) high and low refers to the results of microarrays in which hierarchical clustering was applied to identify reoccurring amplification or deletion regions. The CN-high group was associated with the poorest outcomes (recurrence and survival). There is significant overlap with mutations in TP53. Most serous carcinomas were CN high; however, 25% of patients with high-grade endometrioid cell type shared the CN-high classification. These tumors shared great molecular similarity to high-grade serous ovarian cancers and basal-like breast cancer.

Those patients who did not possess mutations that classified them as POLE hypermutated, MSI high, or CN high were classified as CN low. This group included predominantly grades 1 and 2 endometrioid adenocarcinomas of an early stage and had a favorable prognostic profile, though less favorable than those with a POLE ultramutated state, which appears to be somewhat protective.
 

Molecular/metabolic interactions

While molecular data are clearly important in driving a cancer cell’s behavior, other clinical and metabolic factors influence cancer behavior. For example, body mass index, adiposity, glucose, and lipid metabolism have been shown to be important drivers of cellular behavior and responsiveness to targeted therapies.5,6 Additionally age, race, and other metabolic states contribute to oncologic behavior. Future classifications of endometrial cancer are unlikely to use molecular profiles in isolation but will need to incorporate these additional patient-specific data to better predict and prognosticate outcomes.

Clinical applications

If researchers can better define and describe a patient’s endometrial cancer from the time of their biopsy, important clinical decisions might be able to be tackled. For example, in a premenopausal patient with an endometrial cancer who is considering fertility-sparing treatments, preoperative knowledge of a POLE ultramutated state (and therefore an anticipated good prognosis) might favor fertility preservation or avoid comprehensive staging which may be of limited value. Similarly, if an MSI-high profile is identified leading to a Lynch syndrome diagnosis, she may be more inclined to undergo a hysterectomy with bilateral salpingo-oophorectomy and staging as she is at known increased risk for a more advanced endometrial cancer, as well as the potential for ovarian cancer.

Postoperative incorporation of molecular data promises to be particularly helpful in guiding adjuvant therapies and sparing some women from unnecessary treatments. For example, women with high-grade endometrioid tumors who are CN high were historically treated with radiotherapy but might do better treated with systemic adjuvant therapies traditionally reserved for nonendometrioid carcinomas. Costly therapies such as immunotherapy can be directed toward those with MSI-high tumors, and the rare patient with a POLE ultramutated state who has a recurrence or advanced disease. Clinical trials will be able to cluster enrollment of patients with CN-high, serouslike cancers with those with serous cancers, rather than combining them with patients whose cancers predictably behave much differently.

Much work is still needed to validate this molecular profiling in endometrial cancer and define the algorithms associated with treatment decisions; however, it is likely that the way we describe endometrial cancer in the near future will be quite different.
 

Dr. Rossi is an assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill. She has no disclosures.

References

1. Bokhman JV. Two pathogenetic types of endometrial carcinoma. Gynecol Oncol. 1983;15(1):10-7.

2. Clarke BA et al. Endometrial carcinoma: controversies in histopathological assessment of grade and tumour cell type. J Clin Pathol. 2010;63(5):410-5.

3. Cancer Genome Atlas Research Network. Integrated genomic characterization of endometrial carcinoma. Nature. 2013;497(7447):67-73.

4. Ott PA et al. Pembrolizumab in advanced endometrial cancer: Preliminary results from the phase Ib KEYNOTE-028 study. J Clin Oncol. 2016;34(suppl):Abstract 5581.

5. Roque DR et al. Association between differential gene expression and body mass index among endometrial cancers from the Cancer Genome Atlas Project. Gynecol Oncol. 2016;142(2):317-22.

6. Talhouk A et al. New classification of endometrial cancers: The development and potential applications of genomic-based classification in research and clinical care. Gynecol Oncol Res Pract. 2016 Dec;3:14.

 

We classify endometrial cancer so that we can communicate and define each patient’s disease status, the potential for harm, and the likelihood that adjuvant therapies might provide help. Traditional forms of classification have clearly fallen short in achieving this aim, as we all know of patients with apparent low-risk disease (such as stage IA grade 1 endometrioid carcinoma) who have had recurrences and died from their disease, and we know that many patients have been subjected to overtreatment for their cancer and have acquired lifelong toxicities of therapy. This column will explore the newer, more sophisticated molecular-based classifications that are being validated for endometrial cancer, and the ways in which this promises to personalize the treatment of endometrial cancer.

Dr. Emma C. Rossi
We historically considered endometrial cancer with respect to “types”: type 1 cancer being estrogen dependent, featuring PTEN mutations, and affecting more obese patients; type 2 cancer being associated with p53 mutations, not estrogen dependent, and affecting older, less obese individuals.1 These categories were reasonable guides but ultimately oversimplified the disease and its affected patients. Additionally we have used histologic types, International Federation of Gynecology and Obstetrics grading, and surgical staging to categorize tumors. Unfortunately, histologic cell type and grade are limited by poor agreement among pathologists, with up to 50% discordance between readers, and surgical staging information may be limited in its completeness.2 Therefore, these categorizations lack the precision and accuracy to serve as prognosticators or to direct therapy. Reliance upon these inaccurate and imprecise methods of characterization may be part of the reason why most major clinical trials have failed to identify survival benefits for experimental therapies in early-stage disease. We may have been indiscriminately applying therapies instead of targeting the patients who are the most likely to derive benefit.

Breast cancer and melanoma are examples of the inclusion of molecular data such as hormone receptor status, HER2/neu status, or BRAF positivity resulting in advancements in personalizing therapeutics. We are now moving toward this for endometrial cancer.
 

What is the Cancer Genome Atlas?

In 2006 the National Institutes of Health announced an initiative to coordinate work between the National Cancer Institute and the National Human Genome Research Institute taking information about the human genome and analyzing it for key genomic alterations found in 33 common cancers. These data were combined with clinical information (such as survival) to classify the behaviors of those cancers with respect to their individual genomic alternations, in order to look for patterns in mutations and behaviors. The goal of this analysis was to shift the paradigm of cancer classification from being centered around primary organ site toward tumors’ shared genomic patterns.

In 2013 the Cancer Genome Atlas published their results of complete gene sequencing in endometrial cancer.3 The authors identified four discrete subgroups of endometrial cancer with distinct molecular mutational profiles and distinct clinical outcomes: polymerase epsilon (POLE, pronounced “pole-ee”) ultramutated, microsatellite instability (MSI) high, copy number high, and copy number low.
 

POLE ultramutated

An important subgroup identified in the Cancer Genome Atlas was a group of patients with a POLE ultramutated state. POLE encodes for a subunit of DNA polymerase, the enzyme responsible for replicating the leading DNA strand. Nonfunctioning POLE results in proofreading errors and a subsequent ultramutated cellular state with a predominance of single nucleotide variants. POLE proofreading domain mutations in endometrial cancer and colon cancer are associated with excellent prognosis, likely secondary to the immune response that is elicited by this ultramutated state from creation of “antigenic neoepitopes” that stimulate T-cell response. Effectively, the very mutated cell is seen as “more foreign” to the body’s immune system.

Approximately 10% of patients with endometrial cancer have a POLE ultramutated state, and, as stated above, prognosis is excellent, even if coexisting with a histologic cell type (such as serous) that is normally associated with adverse outcomes. These women tend to be younger, with a lower body mass index, higher-grade endometrioid cell type, the presence of lymphovascular space invasion, and low stage.
 

MSI high

MSI (microsatellite instability) is a result of epigenetic/hypermethylations or loss of expression in mismatch repair genes (such as MLH1, MSH2, MSH6, PMS2). These genes code for proteins critical in the repair of mismatches in short repeated sequences of DNA. Loss of their function results in an accumulation of errors in these sequences: MSI. It is a feature of the Lynch syndrome inherited state, but is also found sporadically in endometrial tumors. These tumors accumulate a number of mutations during cell replication that, as in POLE hypermutated tumors, are associated with eliciting an immune response.

 

 

These tumors tend to be associated with a higher-grade endometrioid cell type, the presence of lymphovascular space invasion, and an advanced stage. Patients with tumors that have been described as MSI high are candidates for “immune therapy” with the PDL1 inhibitor pembrolizumab because of their proinflammatory state and observed favorable responses in clinical trials.4
 

Copy number high/low

Copy number (CN) high and low refers to the results of microarrays in which hierarchical clustering was applied to identify reoccurring amplification or deletion regions. The CN-high group was associated with the poorest outcomes (recurrence and survival). There is significant overlap with mutations in TP53. Most serous carcinomas were CN high; however, 25% of patients with high-grade endometrioid cell type shared the CN-high classification. These tumors shared great molecular similarity to high-grade serous ovarian cancers and basal-like breast cancer.

Those patients who did not possess mutations that classified them as POLE hypermutated, MSI high, or CN high were classified as CN low. This group included predominantly grades 1 and 2 endometrioid adenocarcinomas of an early stage and had a favorable prognostic profile, though less favorable than those with a POLE ultramutated state, which appears to be somewhat protective.
 

Molecular/metabolic interactions

While molecular data are clearly important in driving a cancer cell’s behavior, other clinical and metabolic factors influence cancer behavior. For example, body mass index, adiposity, glucose, and lipid metabolism have been shown to be important drivers of cellular behavior and responsiveness to targeted therapies.5,6 Additionally age, race, and other metabolic states contribute to oncologic behavior. Future classifications of endometrial cancer are unlikely to use molecular profiles in isolation but will need to incorporate these additional patient-specific data to better predict and prognosticate outcomes.

Clinical applications

If researchers can better define and describe a patient’s endometrial cancer from the time of their biopsy, important clinical decisions might be able to be tackled. For example, in a premenopausal patient with an endometrial cancer who is considering fertility-sparing treatments, preoperative knowledge of a POLE ultramutated state (and therefore an anticipated good prognosis) might favor fertility preservation or avoid comprehensive staging which may be of limited value. Similarly, if an MSI-high profile is identified leading to a Lynch syndrome diagnosis, she may be more inclined to undergo a hysterectomy with bilateral salpingo-oophorectomy and staging as she is at known increased risk for a more advanced endometrial cancer, as well as the potential for ovarian cancer.

Postoperative incorporation of molecular data promises to be particularly helpful in guiding adjuvant therapies and sparing some women from unnecessary treatments. For example, women with high-grade endometrioid tumors who are CN high were historically treated with radiotherapy but might do better treated with systemic adjuvant therapies traditionally reserved for nonendometrioid carcinomas. Costly therapies such as immunotherapy can be directed toward those with MSI-high tumors, and the rare patient with a POLE ultramutated state who has a recurrence or advanced disease. Clinical trials will be able to cluster enrollment of patients with CN-high, serouslike cancers with those with serous cancers, rather than combining them with patients whose cancers predictably behave much differently.

Much work is still needed to validate this molecular profiling in endometrial cancer and define the algorithms associated with treatment decisions; however, it is likely that the way we describe endometrial cancer in the near future will be quite different.
 

Dr. Rossi is an assistant professor in the division of gynecologic oncology at the University of North Carolina at Chapel Hill. She has no disclosures.

References

1. Bokhman JV. Two pathogenetic types of endometrial carcinoma. Gynecol Oncol. 1983;15(1):10-7.

2. Clarke BA et al. Endometrial carcinoma: controversies in histopathological assessment of grade and tumour cell type. J Clin Pathol. 2010;63(5):410-5.

3. Cancer Genome Atlas Research Network. Integrated genomic characterization of endometrial carcinoma. Nature. 2013;497(7447):67-73.

4. Ott PA et al. Pembrolizumab in advanced endometrial cancer: Preliminary results from the phase Ib KEYNOTE-028 study. J Clin Oncol. 2016;34(suppl):Abstract 5581.

5. Roque DR et al. Association between differential gene expression and body mass index among endometrial cancers from the Cancer Genome Atlas Project. Gynecol Oncol. 2016;142(2):317-22.

6. Talhouk A et al. New classification of endometrial cancers: The development and potential applications of genomic-based classification in research and clinical care. Gynecol Oncol Res Pract. 2016 Dec;3:14.

Publications
Publications
Topics
Article Type
Sections
Disallow All Ads
Content Gating
No Gating (article Unlocked/Free)
Alternative CME
Disqus Comments
Default
Use ProPublica

Marijuana use is affecting the job market

Article Type
Changed
Fri, 01/18/2019 - 17:44

 

I have a friend who owns a large paving and excavating company. He currently is turning away large contracts because he can’t find employees to drive his dump trucks and operate his heavy machinery. The situation is so dire that he has begun to explore the possibility of recruiting employees out of the corrections system.

Like much of the country, Maine is experiencing a low level of unemployment that few of us over the age of 50 years can recall. Coupled with a confused and unwelcoming immigration policy at the federal level, this has left many small and large companies struggling to find employees. The employment opportunities my friend’s company is offering are well above minimum wage, paying in the $30,000-$70,000 range with benefits. While the jobs require some special skills, his company is large enough that it can provide in-house training.

Doug Menuez/thinkstock
While my friend’s current situation is the result of a perfect storm of economic and political factors, what frustrates him the most is hearing that a significant number of potential employees are scared off when they realize that these good-paying jobs will require them to take and pass a drug test. He has learned of several young men and women who have chosen jobs with significantly lower salaries and fewer benefits simply to avoid taking a drug test.

Maine residents recently voted to decriminalize the possession of small amounts of marijuana. It is unclear exactly how this change in the official position of the state government will translate into a distribution network and a system of local codes. However, it does reflect a more tolerant attitude toward marijuana use. It also suggests that job seekers who avoid positions requiring drug testing are not worried about the stigma of being identified as users. They understand enough pharmacology to know that marijuana is detectable days and even weeks after it was last ingested or inhaled. Even recreational users realize that their chances of passing a preemployment drug test, and any subsequent random test, are slim.

The problem is that these good-paying jobs are going unfilled because of the pharmacologic properties of a drug and our current inability to devise a test that accurately and consistently correlates a person’s blood level with his or her ability to safely operate a motor vehicle or piece of heavy equipment (“Establishing legal limit for driving under the influence of marijuana,” Inj Epidemiol. 2014 Dec;1[1]:26). There is some correlation between blood levels and whether a person is a heavy or infrequent user. Laws that rely on a zero-tolerance philosophy are not bringing us any closer to a solution. And it is probably unrealistic to hope that in the near future scientists will develop a single, simply administered test that can provide a clear yes or no on the question of impairment in the workplace.

I can envision a two-tier system in which all employees undergo blood or urine testing on a 3-month schedule. Those with a positive test would then take a 10-minute test on a laptop computer simulator with a joystick each morning they arrive on the job, to demonstrate that, despite a history of marijuana use, they are not impaired.

Even if such a test is developed, we still owe our patients the reminder that, despite its decriminalization, marijuana is a drug and, like any drug, has side effects. One of them is that it can limit your employment opportunities.

Dr. William G. Wilkoff

Dr. Wilkoff practiced primary care pediatrics in Brunswick, Maine, for nearly 40 years. He has authored several books on behavioral pediatrics, including “How to Say No to Your Toddler.” Email him at [email protected].
