Chronic Myeloid Leukemia: Selecting First-line TKI Therapy
From the Moffitt Cancer Center, Tampa, FL.
Abstract
- Objective: To outline the approach to selecting a tyrosine kinase inhibitor (TKI) for initial treatment of chronic myeloid leukemia (CML) and monitoring patients following initiation of therapy.
- Methods: Review of the literature and evidence-based guidelines.
- Results: The development and availability of TKIs has improved survival for patients diagnosed with CML. The life expectancy of patients diagnosed with chronic-phase CML (CP-CML) is similar to that of the general population, provided they receive appropriate TKI therapy and adhere to treatment. Selection of the most appropriate first-line TKI for newly diagnosed CP-CML requires incorporation of the patient’s baseline karyotype and Sokal or EURO risk score, and a clear understanding of the patient’s comorbidities. The adverse effect profile of all TKIs must be considered in conjunction with the patient’s ongoing medical issues to decrease the likelihood of worsening their current symptoms or causing a severe complication from TKI therapy. After confirming a diagnosis of CML and selecting the most appropriate TKI for first-line therapy, close monitoring and follow-up are necessary to ensure patients are meeting the desired treatment milestones. Responses in CML can be assessed based on hematologic parameters, cytogenetic results, and molecular responses.
- Conclusion: Given the successful treatments available for patients with CML, it is crucial to identify patients with this diagnosis; ensure they receive a complete, appropriate diagnostic workup including a bone marrow biopsy and aspiration with cytogenetic testing; and select the best therapy for each individual patient.
Keywords: chronic myeloid leukemia; CML; tyrosine kinase inhibitor; TKI; cancer; BCR-ABL protein.
Chronic myeloid leukemia (CML) is a rare myeloproliferative neoplasm that is characterized by the presence of the Philadelphia (Ph) chromosome and uninhibited expansion of bone marrow stem cells. The Ph chromosome arises from a reciprocal translocation between the Abelson (ABL) region on chromosome 9 and the breakpoint cluster region (BCR) of chromosome 22 (t(9;22)(q34;q11.2)), resulting in the BCR-ABL1 fusion gene and its protein product, BCR-ABL tyrosine kinase.1 BCR-ABL has constitutive tyrosine kinase activity that promotes growth, replication, and survival of hematopoietic cells through downstream pathways, which is the driving factor in the pathogenesis of CML.1
CML is divided into 3 phases based on the number of myeloblasts observed in the blood or bone marrow: chronic, accelerated, and blast. Most cases of CML are diagnosed in the chronic phase (CP), which is marked by proliferation of primarily the myeloid element.
Typical treatment for CML involves lifelong use of oral BCR-ABL tyrosine kinase inhibitors (TKIs), a class of small molecules targeting tyrosine kinases, particularly BCR-ABL. Currently, 5 TKIs have regulatory approval for treatment of this disease. The advent of TKIs led to rapid changes in the management of CML and improved survival for patients. Patients diagnosed with chronic-phase CML (CP-CML) now have a life expectancy that is similar to that of the general population, as long as they receive appropriate TKI therapy and adhere to treatment. As such, it is crucial to identify patients with CML; ensure they receive a complete, appropriate diagnostic workup; and select the best therapy for each patient.
Epidemiology
According to SEER data estimates, 8430 new cases of CML were diagnosed in the United States in 2018. CML is a disease of older adults, with a median age of 65 years at diagnosis, and there is a slight male predominance. Between 2011 and 2015, the annual incidence of CML was 1.8 new cases per 100,000 persons. The median overall survival (OS) in patients with newly diagnosed CP-CML has not been reached.2 Given the effective treatments available for managing CML, it is estimated that the prevalence of CML in the United States will plateau at 180,000 patients by 2050.3
Diagnosis
Clinical Features
The diagnosis of CML is often suspected based on an incidental finding of leukocytosis and, in some cases, thrombocytosis on routine blood work; however, approximately 50% of patients will present with constitutional symptoms associated with the disease. Characteristic features of the white blood cell differential include left-shifted maturation with neutrophilia and immature circulating myeloid cells. Basophilia and eosinophilia are often present as well. Splenomegaly is a common sign, present in 50% to 90% of patients at diagnosis. In patients who are symptomatic at diagnosis, the most common presenting complaints are increasing fatigue, fevers, night sweats, early satiety, and weight loss. The diagnosis is confirmed by cytogenetic studies showing the Ph chromosome abnormality, t(9;22)(q34;q11.2), and/or reverse transcriptase polymerase chain reaction (PCR) showing BCR-ABL1 transcripts.
Testing
Bone marrow biopsy. There are 3 distinct phases of CML: CP, accelerated phase (AP), and blast phase (BP). Bone marrow biopsy and aspiration at diagnosis are mandatory in order to determine the phase of the disease at diagnosis. This distinction is based on the percentage of blasts, promyelocytes, and basophils present as well as the platelet count and presence or absence of extramedullary disease.4 The vast majority of patients at diagnosis have CML that is in the chronic phase. The typical appearance in CP-CML is a hypercellular marrow with granulocytic and occasionally megakaryocytic hyperplasia. In many cases, basophilia and/or eosinophilia are noted as well. Dysplasia is not a typical finding in CML.5 Bone marrow fibrosis can be seen in up to one-third of patients at diagnosis, and may indicate a slightly worse prognosis.6 Although a diagnosis of CML can be made without a bone marrow biopsy, complete staging and prognostication are only possible with information gained from this test, including baseline karyotype and confirmation of CP versus a more advanced phase of CML.
Diagnostic criteria. The criteria for diagnosing AP-CML have not been agreed upon by various groups, but the modified MD Anderson Cancer Center (MDACC) criteria are used in the majority of clinical trials evaluating the efficacy of TKIs in preventing progression to advanced phases of CML. The MDACC criteria define AP-CML as the presence of any 1 of the following: 15% to 29% blasts in the peripheral blood or bone marrow; ≥ 30% peripheral blasts plus promyelocytes; ≥ 20% basophils in the blood or bone marrow; platelet count ≤ 100,000/μL unrelated to therapy; or clonal cytogenetic evolution in Ph-positive metaphases (Table).7
BP-CML is typically defined using the criteria developed by the International Bone Marrow Transplant Registry (IBMTR): ≥ 30% blasts in the peripheral blood and/or the bone marrow or the presence of extramedullary disease.8 Although not typically used in clinical trials, the revised World Health Organization (WHO) criteria for BP-CML include ≥ 20% blasts in the peripheral blood or bone marrow, extramedullary blast proliferation, and large foci or clusters of blasts in the bone marrow biopsy sample (Table).9
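The phase definitions above amount to a set of threshold checks, which can be expressed as a simple classifier. The sketch below is illustrative only (function and parameter names are not from the source); it encodes the modified MDACC criteria for AP-CML and the IBMTR criterion for BP-CML as summarized above.

```python
def cml_phase(blasts_pct, blasts_plus_promyelocytes_pct, basophils_pct,
              platelets_per_ul, clonal_evolution=False,
              extramedullary_disease=False,
              therapy_related_thrombocytopenia=False):
    """Classify CML phase per the modified MDACC (AP) and IBMTR (BP)
    criteria described in the text. Percentages may come from peripheral
    blood or bone marrow, as the criteria allow."""
    # IBMTR blast phase: >= 30% blasts or extramedullary disease
    if blasts_pct >= 30 or extramedullary_disease:
        return "blast phase (IBMTR)"
    # Modified MDACC accelerated phase: any single criterion suffices
    if (15 <= blasts_pct <= 29
            or blasts_plus_promyelocytes_pct >= 30
            or basophils_pct >= 20
            or (platelets_per_ul <= 100_000
                and not therapy_related_thrombocytopenia)
            or clonal_evolution):
        return "accelerated phase (MDACC)"
    return "chronic phase"
```

Because each AP criterion is independently sufficient, the checks are combined with `or`; the BP check must come first, since a marrow with ≥ 30% blasts would otherwise also satisfy the AP blast-plus-promyelocyte criterion.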
The defining feature of CML is the presence of the Ph chromosome abnormality. In a small subset of patients, additional chromosome abnormalities (ACA) in the Ph-positive cells may be identified at diagnosis. Some reports indicate that the presence of “major route” ACA (trisomy 8, isochromosome 17q, a second Ph chromosome, or trisomy 19) at diagnosis may negatively impact prognosis, but other reports contradict these findings.10,11
PCR assay. The typical BCR breakpoint in CML is the major breakpoint cluster region (M-BCR), which results in a 210-kDa protein (p210). Alternate breakpoints that are less frequently identified are the minor BCR (mBCR or p190), which is more commonly found in Ph-positive acute lymphoblastic leukemia (ALL), and the micro BCR (µBCR or p230), which is much less common and is often characterized by chronic neutrophilia.12 Identifying which BCR-ABL1 transcript is present in each patient using qualitative PCR is crucial in order to ensure proper monitoring during treatment.
The most sensitive method for detecting BCR-ABL1 mRNA transcripts is the quantitative real-time PCR (RQ-PCR) assay, which is typically done on peripheral blood. RQ-PCR is capable of detecting a single CML cell in the presence of ≥ 100,000 normal cells. This test should be done during the initial diagnostic workup in order to confirm the presence of BCR-ABL1 transcripts, and it is used as a standard method for monitoring response to TKI therapy.13 The International Scale (IS) is a standardized approach to reporting RQ-PCR results that was developed to allow comparison of results across various laboratories and has become the gold standard for reporting BCR-ABL1 transcript values.14
Determining Risk Scores
Calculating a patient’s Sokal score or EURO risk score at diagnosis remains an important component of the diagnostic workup in CP-CML, as this information has prognostic and therapeutic implications (an online calculator is available through European LeukemiaNet [ELN]). The risk for disease progression to the accelerated or blast phases is higher in patients with intermediate or high risk scores compared to those with a low risk score at diagnosis. The risk of progression in intermediate- or high-risk patients is lower when a second-generation TKI (dasatinib, nilotinib, or bosutinib) is used as frontline therapy compared to imatinib, and therefore, the National Comprehensive Cancer Network (NCCN) CML Panel recommends starting with a second-generation TKI in these patients.15-19
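For reference, the published Sokal formula combines age, spleen size, platelet count, and peripheral blast percentage into a relative risk. The sketch below uses the coefficients and conventional risk-group cutoffs from the original Sokal publication; it is an illustration only, and in practice the ELN online calculator mentioned above should be used.

```python
import math

def sokal_score(age_years, spleen_cm, platelets_e9_per_l, blasts_pct):
    """Sokal relative risk (Sokal et al., 1984). Spleen is the palpable
    size in cm below the costal margin; platelets in 10^9/L; blasts as a
    percentage in peripheral blood."""
    exponent = (0.0116 * (age_years - 43.4)
                + 0.0345 * (spleen_cm - 7.51)
                + 0.188 * ((platelets_e9_per_l / 700) ** 2 - 0.563)
                + 0.0887 * (blasts_pct - 2.10))
    return math.exp(exponent)

def sokal_risk_group(score):
    # Conventional cutoffs: < 0.8 low, 0.8-1.2 intermediate, > 1.2 high
    if score < 0.8:
        return "low"
    if score <= 1.2:
        return "intermediate"
    return "high"
```

As the text notes, an intermediate or high result here would favor a second-generation TKI over imatinib in the frontline setting.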
Monitoring Response to Therapy
After confirming a diagnosis of CML and selecting the most appropriate TKI for first-line therapy, the successful management of CML patients relies on close monitoring and follow-up to ensure they are meeting the desired treatment milestones. Responses in CML can be assessed based on hematologic parameters, cytogenetic results, and molecular responses. A complete hematologic response (CHR) implies complete normalization of peripheral blood counts (with the exception of TKI-induced cytopenias) and resolution of any palpable splenomegaly. The majority of patients will achieve a CHR within 4 to 6 weeks after initiating CML-directed therapy.20
Cytogenetic Response
Cytogenetic responses are defined by the decrease in the number of Ph chromosome–positive metaphases when assessed on bone marrow cytogenetics. A partial cytogenetic response (PCyR) is defined as having 1% to 35% Ph-positive metaphases, a major cytogenetic response (MCyR) as having 0% to 35% Ph-positive metaphases, and a complete cytogenetic response (CCyR) implies that no Ph-positive metaphases are identified on bone marrow cytogenetics. An ideal response is the achievement of PCyR after 3 months on a TKI and a CCyR after 12 months on a TKI.21
Molecular Response
Once a patient has achieved a CCyR, monitoring their response to therapy can only be done using RQ-PCR to measure BCR-ABL1 transcripts in the peripheral blood. The NCCN and the ELN recommend monitoring RQ-PCR from the peripheral blood every 3 months in order to assess response to TKIs.19,22 As noted, the IS has become the gold standard reporting system for all BCR-ABL1 transcript levels in the majority of laboratories worldwide.14,23 Molecular responses are based on a log reduction in BCR-ABL1 transcripts from a standardized baseline. Many molecular responses can be correlated with cytogenetic responses such that, if reliable RQ-PCR testing is available, monitoring can be done using only peripheral blood RQ-PCR rather than repeat bone marrow biopsies. For example, an early molecular response (EMR) is defined as a RQ-PCR value of ≤ 10% IS, which is approximately equivalent to a PCyR.24 A value of 1% IS is approximately equivalent to a CCyR. A major molecular response (MMR) is a ≥ 3-log reduction in BCR-ABL1 transcripts from baseline and is a value of ≤ 0.1% IS. Deeper levels of molecular response are best described by the log reduction in BCR-ABL1 transcripts, with a 4-log reduction denoted as MR4.0, a 4.5-log reduction as MR4.5, and so forth. Complete molecular response (CMR) is defined by the level of sensitivity of the RQ-PCR assay being used.14
The definition of relapsed disease in CML is dependent on the type of response the patient had previously achieved. Relapse could be the loss of a hematologic or cytogenetic response, but fluctuations in BCR-ABL1 transcripts on routine RQ-PCR do not necessarily indicate relapsed CML. A 1-log increase in the level of BCR-ABL1 transcripts with a concurrent loss of MMR should prompt a bone marrow biopsy in order to assess for the loss of CCyR, and thus a cytogenetic relapse; however, this loss of MMR does not define relapse in and of itself. In the setting of relapsed disease, testing should be done to look for possible ABL kinase domain mutations, and alternate therapy should be selected.19
Multiple reports have identified the prognostic relevance of achieving an EMR at 3 and 6 months after starting TKI therapy. Marin and colleagues reported that in 282 imatinib-treated patients, there was a significant improvement in 8-year OS, progression-free survival (PFS), and cumulative incidence of CCyR and CMR in patients who had BCR-ABL1 transcripts < 9.84% IS after 3 months on treatment.24 These data highlight the importance of early molecular monitoring in order to ensure the best outcomes for patients with CP-CML.
The NCCN CML guidelines and ELN recommendations both agree that an ideal response after 3 months on a TKI is BCR-ABL1 transcripts < 10% IS, but treatment is not considered to be failing at this point if the patient marginally misses this milestone. After 6 months on treatment, an ideal response is BCR-ABL1 transcripts of < 1% to < 10% IS, depending on the guideline. Ideally, patients will have BCR-ABL1 transcripts of < 0.1% to < 1% IS by the time they complete 12 months of TKI therapy, suggesting that these patients have at least achieved a CCyR.19,22 Even after patients achieve these early milestones, frequent monitoring by RQ-PCR is required to ensure that they are maintaining their response to treatment. This also helps to ensure patient compliance with treatment and to identify a select subset of patients who could potentially be considered for an attempt at TKI cessation (not discussed in detail here) after a minimum of 3 years on therapy.19,25
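As a rough illustration, the 3-, 6-, and 12-month milestone checks can be encoded as a lookup table. This sketch uses the stricter end of the threshold ranges quoted above; names are illustrative and guideline-specific cutoffs (NCCN versus ELN) differ at 6 and 12 months.

```python
def ideal_milestone_met(months_on_tki, bcr_abl1_is_pct):
    """Check whether a BCR-ABL1 RQ-PCR result (% IS) meets the ideal
    response at a given milestone. Thresholds here take the stricter
    end of the ranges discussed in the text."""
    thresholds = {3: 10.0, 6: 1.0, 12: 0.1}  # % IS
    if months_on_tki not in thresholds:
        raise ValueError("milestones are defined at 3, 6, and 12 months")
    return bcr_abl1_is_pct < thresholds[months_on_tki]
```

Note that, as the text emphasizes, marginally missing a milestone is not by itself treatment failure; a check like this flags patients for closer monitoring rather than an automatic change in therapy.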
Selecting First-line TKI Therapy
Selection of the most appropriate first-line TKI for newly diagnosed CP-CML patients requires incorporation of many patient-specific factors. These factors include baseline karyotype and confirmation of CP-CML through bone marrow biopsy, Sokal or EURO risk score, and a thorough patient history, including a clear understanding of the patient’s comorbidities. The adverse effect profile of all TKIs must be considered in conjunction with the patient’s ongoing medical issues in order to decrease the likelihood of worsening their current symptoms or causing a severe complication from TKI therapy.
Imatinib
The management of CML was revolutionized by the development and ultimate regulatory approval of imatinib mesylate in 2001. Imatinib was the first small-molecule cancer therapy developed and approved. It acts by binding to the adenosine triphosphate (ATP) binding site in the catalytic domain of BCR-ABL, thus inhibiting the oncoprotein’s tyrosine kinase activity.26
The International Randomized Study of Interferon versus STI571 (IRIS) trial was a randomized phase 3 study that compared imatinib 400 mg daily to interferon alfa (IFNa) plus cytarabine. More than 1000 CP-CML patients were randomly assigned 1:1 to either imatinib or IFNa plus cytarabine and were assessed for event-free survival, hematologic and cytogenetic responses, freedom from progression to AP or BP, and toxicity. Imatinib was superior to the prior standard of care for all these outcomes.21 The long-term follow-up of the IRIS trial reported an 83% estimated 10-year OS and 79% estimated event-free survival for patients on the imatinib arm of this study.15 The cumulative rate of CCyR was 82.8%. Of the 204 imatinib-treated patients who could undergo a molecular response evaluation at 10 years, 93.1% had a MMR and 63.2% had a MR4.5, suggesting durable, deep molecular responses for many patients. The estimated 10-year rate of freedom from progression to AP or BP was 92.1%.
Higher doses of imatinib (600-800 mg daily) have been studied in an attempt to overcome resistance and improve cytogenetic and molecular response rates. The Tyrosine Kinase Inhibitor Optimization and Selectivity (TOPS) trial was a randomized phase 3 study that compared imatinib 800 mg daily to imatinib 400 mg daily. Although the 6-month assessments found increased rates of CCyR and MMR in the higher-dose imatinib arm, these differences were no longer present at the 12-month assessment. Furthermore, the higher dose of imatinib led to a significantly higher incidence of grade 3/4 hematologic adverse events, and approximately 50% of patients on imatinib 800 mg daily required a dose reduction to less than 600 mg daily because of toxicity.27
The Therapeutic Intensification in De Novo Leukaemia (TIDEL)-II study used plasma trough levels of imatinib on day 22 of treatment with imatinib 600 mg daily to determine whether patients should escalate to imatinib 800 mg daily. Among patients who did not meet molecular milestones at 3, 6, or 12 months, those in cohort 1 were dose escalated to imatinib 800 mg daily and subsequently switched to nilotinib 400 mg twice daily if they failed the same target 3 months later, while those in cohort 2 were switched directly to nilotinib. At 2 years, 73% of patients achieved MMR and 34% achieved MR4.5, suggesting that initial treatment with higher-dose imatinib, followed by a switch to nilotinib in those failing to achieve desired milestones, could be an effective strategy for managing newly diagnosed CP-CML.28
Toxicity. The standard starting dose of imatinib in CP-CML patients is 400 mg daily. The safety profile of imatinib has been very well established. In the IRIS trial, the most common adverse events (all grades in decreasing order of frequency) were peripheral and periorbital edema (60%), nausea (50%), muscle cramps (49%), musculoskeletal pain (47%), diarrhea (45%), rash (40%), fatigue (39%), abdominal pain (37%), headache (37%), and joint pain (31%). Grade 3/4 liver enzyme elevation can occur in 5% of patients.29 In the event of severe liver toxicity or fluid retention, imatinib should be held until the event resolves. At that time, imatinib can be restarted if deemed appropriate, but this is dependent on the severity of the inciting event. Fluid retention can be managed by the use of supportive care, diuretics, imatinib dose reduction, dose interruption, or imatinib discontinuation if the fluid retention is severe. Muscle cramps can be managed by the use of calcium supplements or tonic water. Management of rash can include topical or systemic steroids, or in some cases imatinib dose reduction, interruption, or discontinuation.19
Grade 3/4 imatinib-induced hematologic toxicity is not uncommon, with 17% of patients experiencing neutropenia, 9% thrombocytopenia, and 4% anemia. These adverse events occurred most commonly during the first year of therapy, and the frequency decreased over time.15,29 Depending on the degree of cytopenias, imatinib dosing should be interrupted until recovery of the absolute neutrophil count or platelet count, and can often be resumed at 400 mg daily. However, if cytopenias recur, imatinib should be held and subsequently restarted at 300 mg daily.19
Dasatinib
Dasatinib is a second-generation TKI that has regulatory approval for treatment of adult patients with newly diagnosed CP-CML or CP-CML in patients with resistance or intolerance to prior TKIs. In addition to dasatinib’s ability to inhibit ABL kinases, it is also known to be a potent inhibitor of Src family kinases. Dasatinib has shown efficacy in patients who have developed imatinib-resistant ABL kinase domain mutations.
Dasatinib was initially approved as second-line therapy in patients with resistance or intolerance to imatinib. This indication was based on the results of the phase 3 CA180-034 trial, which ultimately identified dasatinib 100 mg daily as the optimal dose. In this trial, 74% of patients enrolled had resistance to imatinib and the remainder were intolerant. The 7-year follow-up of patients randomized to dasatinib 100 mg daily (n = 167) indicated that 46% achieved MMR while on study. Of the 124 imatinib-resistant patients on dasatinib 100 mg daily, the 7-year PFS was 39% and OS was 63%. In the 43 imatinib-intolerant patients, the 7-year PFS was 51% and OS was 70%.30
Dasatinib 100 mg daily was compared to imatinib 400 mg daily in newly diagnosed CP-CML patients in the randomized phase 3 DASISION (Dasatinib versus Imatinib Study in Treatment-Naive CML Patients) trial. More patients on the dasatinib arm achieved an EMR of BCR-ABL1 transcripts ≤ 10% IS after 3 months on treatment compared to imatinib (84% versus 64%). Furthermore, the 5-year follow-up reports that the cumulative incidence of MMR and MR4.5 in dasatinib-treated patients was 76% and 42%, and was 64% and 33% with imatinib (P = 0.0022 and P = 0.0251, respectively). Fewer patients treated with dasatinib progressed to AP or BP (4.6%) compared to imatinib (7.3%), but the estimated 5-year OS was similar between the 2 arms (91% for dasatinib versus 90% for imatinib).16 Regulatory approval for dasatinib as first-line therapy in newly diagnosed CML patients was based on results of the DASISION trial.
Toxicity. Most dasatinib-related toxicities are reported as grade 1 or grade 2, but grade 3/4 hematologic adverse events are fairly common. In the DASISION trial, grade 3/4 neutropenia, anemia, and thrombocytopenia occurred in 29%, 13%, and 22% of dasatinib-treated patients, respectively. Cytopenias can generally be managed with temporary dose interruptions or dose reductions.
During the 5-year follow-up of the DASISION trial, pleural effusions were reported in 28% of patients, most of which were grade 1/2. This occurred at a rate of up to approximately 8% per year, suggesting a stable incidence over time, and the effusions appear to be dose-dependent.16 Depending on the severity, pleural effusion may be treated with diuretics, dose interruption, and, in some instances, steroids or a thoracentesis. Typically, dasatinib can be restarted at 1 dose level lower than the previous dose once the effusion has resolved.19 Other, less common side effects of dasatinib include pulmonary hypertension (5% of patients), as well as abdominal pain, fluid retention, headaches, fatigue, musculoskeletal pain, rash, nausea, and diarrhea. Pulmonary hypertension is typically reversible after cessation of dasatinib, and thus dasatinib should be permanently discontinued once the diagnosis is confirmed. Fluid retention is often treated with diuretics and supportive care. Nausea and diarrhea are generally manageable and occur less frequently when dasatinib is taken with food and a large glass of water. Antiemetics and antidiarrheals can be used as needed. Troublesome rash can be best managed with topical or systemic steroids as well as possible dose reduction or dose interruption.16,19 In the DASISION trial, adverse events led to therapy discontinuation more often in the dasatinib group than in the imatinib group (16% versus 7%).16 Bleeding, particularly in the setting of thrombocytopenia, has been reported in patients being treated with dasatinib as a result of the drug-induced reversible inhibition of platelet aggregation.31
Nilotinib
The structure of nilotinib is similar to that of imatinib; however, it has a markedly increased affinity for the ATP‐binding site on the BCR-ABL1 protein. It was initially given regulatory approval in the setting of imatinib failure. Nilotinib was studied at a dose of 400 mg twice daily in 321 patients who were imatinib-resistant or -intolerant. It proved to be highly effective at inducing cytogenetic remissions in the second-line setting, with 59% of patients achieving a MCyR and 45% achieving a CCyR. With a median follow-up time of 4 years, the OS was 78%.32
Nilotinib gained regulatory approval for use as a first-line TKI after completion of the randomized phase 3 ENESTnd (Evaluating Nilotinib Efficacy and Safety in Clinical Trials-Newly Diagnosed Patients) trial. ENESTnd was a 3-arm study comparing nilotinib 300 mg twice daily versus nilotinib 400 mg twice daily versus imatinib 400 mg daily in newly diagnosed, previously untreated patients diagnosed with CP-CML. The primary endpoint of this clinical trial was rate of MMR at 12 months.33 Nilotinib surpassed imatinib in this regard, with 44% of patients on nilotinib 300 mg twice daily achieving MMR at 12 months versus 43% of nilotinib 400 mg twice daily patients versus 22% of the imatinib-treated patients (P < 0.001 for both comparisons). Furthermore, the rate of CCyR by 12 months was significantly higher for both nilotinib arms compared with imatinib (80% for nilotinib 300 mg, 78% for nilotinib 400 mg, and 65% for imatinib) (P < 0.001).12 Based on this data, nilotinib 300 mg twice daily was chosen as the standard dose of nilotinib in the first-line setting. After 5 years of follow-up on the ENESTnd study, there were fewer progressions to AP/BP CML in nilotinib-treated patients compared with imatinib. MMR was achieved in 77% of nilotinib 300 mg patients compared with 60.4% of patients on the imatinib arm. MR4.5 was also more common in patients treated with nilotinib 300 mg twice daily, with a rate of 53.5% at 5 years versus 31.4% in the imatinib arm.17 In spite of the deeper cytogenetic and molecular responses achieved with nilotinib, this did not translate into a significant improvement in OS. The 5-year OS rate was 93.7% in nilotinib 300 mg patients versus 91.7% in imatinib-treated patients, and this difference lacked statistical significance.17
Toxicity. Although some similarities exist between the toxicity profiles of nilotinib and imatinib, each drug has some distinct adverse events. On the ENESTnd trial, the rate of any grade 3/4 non-hematologic adverse event was fairly low; however, lower-grade toxicities were not uncommon. Patients treated with nilotinib 300 mg twice daily most commonly experienced rash (31%), pruritus (15%), headache (14%), and fatigue (11%). The most frequently reported laboratory abnormalities included increased alanine aminotransferase (ALT; 66%), increased total bilirubin (53%), increased aspartate aminotransferase (AST; 40%), hyperglycemia (36%), hypophosphatemia (32%), and elevated lipase (24%). Any grade of neutropenia, thrombocytopenia, or anemia occurred at rates of 43%, 48%, and 38%, respectively.33 Although nilotinib has a Black Box Warning from the US Food and Drug Administration for QT interval prolongation, no patients on the ENESTnd trial experienced a corrected QT (QTc) interval greater than 500 msec.12
More recent concerns have emerged regarding the potential for cardiovascular toxicity after long-term use of nilotinib. The 5-year update of ENESTnd reported cardiovascular events, including ischemic heart disease, ischemic cerebrovascular events, and peripheral arterial disease, in 7.5% of patients treated with nilotinib 300 mg twice daily, compared with a rate of 2.1% in imatinib-treated patients. The frequency of these cardiovascular events increased linearly over time in both arms. Elevations in total cholesterol from baseline occurred in 27.6% of nilotinib patients compared with 3.9% of imatinib patients. Furthermore, clinically meaningful increases in low-density lipoprotein cholesterol and glycated hemoglobin occurred more frequently with nilotinib therapy.33
Nilotinib should be taken on an empty stomach; therefore, patients should be made aware of the need to fast for 2 hours prior to each dose and 1 hour after each dose. Given the potential risk of QT interval prolongation, a baseline electrocardiogram (ECG) is recommended prior to initiating treatment to ensure the QT interval is within a normal range. A repeat ECG should be done approximately 7 days after nilotinib initiation to ensure no prolongation of the QT interval after starting. Close monitoring of potassium and magnesium levels is important to decrease the risk of cardiac arrhythmias, and concomitant use of drugs considered strong CYP3A4 inhibitors should be avoided.19
If the patient experiences any grade 3 or higher laboratory abnormalities, nilotinib should be held until resolution of the toxicity, and then restarted at a lower dose. Similarly, if patients develop significant neutropenia or thrombocytopenia, nilotinib doses should be interrupted until resolution of the cytopenias. At that point, nilotinib can be reinitiated at either the same or a lower dose. Rash can be managed by the use of topical or systemic steroids as well as potential dose reduction, interruption, or discontinuation.
Given the concerns for potential cardiovascular events with long-term use of nilotinib, caution is advised when prescribing it to any patient with a history of cardiovascular disease or peripheral arterial occlusive disease. At the first sign of new occlusive disease, nilotinib should be discontinued.19
Bosutinib
Bosutinib is a second-generation BCR-ABL TKI with activity against the Src family of kinases; it was initially approved to treat patients with CP-, AP-, or BP-CML after resistance or intolerance to imatinib. Long-term data has been reported from the phase 1/2 trial of bosutinib therapy in patients with CP-CML who developed resistance or intolerance to imatinib plus dasatinib and/or nilotinib. A total of 119 patients were included in the 4-year follow-up; 38 were resistant/intolerant to imatinib and resistant to dasatinib, 50 were resistant/intolerant to imatinib and intolerant to dasatinib, 26 were resistant/intolerant to imatinib and resistant to nilotinib, and 5 were resistant/intolerant to imatinib and intolerant to nilotinib or resistant/intolerant to dasatinib and nilotinib. Bosutinib 400 mg daily was studied in this setting. Of the 38 patients with imatinib resistance/intolerance and dasatinib resistance, 39% achieved MCyR, 22% achieved CCyR, and the OS was 67%. Of the 50 patients with imatinib resistance/intolerance and dasatinib intolerance, 42% achieved MCyR, 40% achieved CCyR, and the OS was 80%. Finally, in the 26 patients with imatinib resistance/intolerance and nilotinib resistance, 38% achieved MCyR, 31% achieved CCyR, and the OS was 87%.34
Five-year follow-up from the phase 1/2 clinical trial that studied bosutinib 500 mg daily in CP-CML patients after imatinib failure reported data on 284 patients. By 5 years on study, 60% of patients had achieved MCyR and 50% achieved CCyR with a 71% and 69% probability, respectively, of maintaining these responses at 5 years. The 5-year OS was 84%.35 These data led to the regulatory approval of bosutinib 500 mg daily as second-line or later therapy.
Bosutinib was initially studied in the first-line setting in the randomized phase 3 BELA (Bosutinib Efficacy and Safety in Newly Diagnosed Chronic Myeloid Leukemia) trial. This trial compared bosutinib 500 mg daily to imatinib 400 mg daily in newly diagnosed, previously untreated CP-CML patients. This trial failed to meet its primary endpoint of increased rate of CCyR at 12 months, with 70% of bosutinib patients achieving this response, compared to 68% of imatinib-treated patients (P = 0.601). In spite of this, the rate of MMR at 12 months was significantly higher in the bosutinib arm (41%) compared to the imatinib arm (27%; P = 0.001).36
A second phase 3 trial (BFORE) was designed to study bosutinib 400 mg daily versus imatinib in newly diagnosed, previously untreated CP-CML patients. This study enrolled 536 patients who were randomly assigned 1:1 to bosutinib or imatinib. The primary endpoint was rate of MMR at 12 months. A significantly higher proportion of bosutinib-treated patients achieved this response (47.2%) compared with imatinib-treated patients (36.9%; P = 0.02). Furthermore, by 12 months, 77.2% of patients on the bosutinib arm had achieved CCyR compared with 66.4% on the imatinib arm, a difference that was statistically significant (P = 0.0075). A lower rate of progression to AP- or BP-CML was also noted in bosutinib-treated patients (1.6% versus 2.5%). Based on these data, bosutinib gained regulatory approval for first-line therapy in CP-CML at a dose of 400 mg daily.18
Toxicity. On the BFORE trial, the most common treatment-emergent adverse events of any grade reported in bosutinib-treated patients were diarrhea (70.1%), nausea (35.1%), increased ALT (30.6%), and increased AST (22.8%). Musculoskeletal pain or spasms occurred in 29.5% of patients, rash in 19.8%, fatigue in 19.4%, and headache in 18.7%. Hematologic toxicity was also reported, but most events were grade 1/2: thrombocytopenia was reported in 35.1%, anemia in 18.7%, and neutropenia in 11.2%.18
Cardiovascular events occurred in 5.2% of patients on the bosutinib arm of the BFORE trial, which was similar to the rate observed in imatinib patients. The most common cardiovascular event was QT interval prolongation, which occurred in 1.5% of patients. Pleural effusions were reported in 1.9% of patients treated with bosutinib, and none were grade 3 or higher.18
If liver enzyme elevation occurs at a value greater than 5 times the institutional upper limit of normal, bosutinib should be held until the level recovers to ≤ 2.5 times the upper limit of normal, at which point bosutinib can be restarted at a lower dose. If recovery takes longer than 4 weeks, bosutinib should be permanently discontinued. Liver enzymes elevated greater than 3 times the institutional upper limit of normal and a concurrent elevation in total bilirubin to 2 times the upper limit of normal are consistent with Hy’s law, and bosutinib should be discontinued. Although diarrhea is the most common toxicity associated with bosutinib, it is commonly low grade and transient. Diarrhea occurs most frequently in the first few days after initiating bosutinib. It can often be managed with over-the-counter antidiarrheal medications, but if the diarrhea is grade 3 or higher, bosutinib should be held until recovery to grade 1 or lower. Gastrointestinal side effects may be improved by taking bosutinib with a meal and a large glass of water. Fluid retention can be managed with diuretics and supportive care. Finally, if rash occurs, this can be addressed with topical or systemic steroids as well as bosutinib dose reduction, interruption, or discontinuation.19
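The transaminase thresholds above form a simple decision rule. As a minimal sketch only (the function name and inputs are hypothetical, and actual management should of course follow the prescribing information and NCCN guidelines rather than any code), the logic could be expressed as:

```python
def bosutinib_liver_action(alt_x_uln: float, bili_x_uln: float = 0.0,
                           weeks_on_hold: int = 0) -> str:
    """Illustrative sketch of bosutinib hepatotoxicity management.

    alt_x_uln     -- ALT/AST as a multiple of the institutional upper limit of normal
    bili_x_uln    -- total bilirubin as a multiple of the upper limit of normal
    weeks_on_hold -- weeks bosutinib has already been held for this episode
    """
    # Hy's law: transaminases > 3x ULN with concurrent bilirubin >= 2x ULN
    # -> permanent discontinuation
    if alt_x_uln > 3 and bili_x_uln >= 2:
        return "discontinue (Hy's law)"
    if weeks_on_hold > 0:  # drug is already being held for a prior elevation
        if weeks_on_hold > 4:
            # recovery took longer than 4 weeks -> permanent discontinuation
            return "discontinue (recovery > 4 weeks)"
        if alt_x_uln <= 2.5:
            # recovered to <= 2.5x ULN within 4 weeks -> resume at reduced dose
            return "restart at lower dose"
        return "continue holding"
    if alt_x_uln > 5:
        # isolated elevation > 5x ULN -> hold until recovery to <= 2.5x ULN
        return "hold"
    return "continue with monitoring"
```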
Similar to other TKIs, if bosutinib-induced cytopenias occur, treatment should be held and restarted at the same or a lower dose upon blood count recovery.19
Ponatinib
The most common cause of TKI resistance in CP-CML is the development of ABL kinase domain mutations. The majority of imatinib-resistant mutations can be overcome by the use of second-generation TKIs, including dasatinib, nilotinib, or bosutinib. However, ponatinib is the only BCR-ABL TKI able to overcome a T315I mutation. The phase 2 PACE (Ponatinib Ph-positive ALL and CML Evaluation) trial enrolled patients with CP-, AP-, or BP-CML as well as patients with Ph-positive acute lymphoblastic leukemia who were resistant or intolerant to nilotinib or dasatinib, or who had evidence of a T315I mutation. The starting dose of ponatinib on this trial was 45 mg daily.37 In total, PACE enrolled 267 patients with CP-CML: 203 with resistance or intolerance to nilotinib or dasatinib, and 64 with a T315I mutation. The primary endpoint in the CP cohort was rate of MCyR at any time within 12 months of starting ponatinib. The overall rate of MCyR by 12 months in the CP-CML patients was 56%. In those with a T315I mutation, 70% achieved MCyR, which compared favorably with those with resistance or intolerance to nilotinib or dasatinib, 51% of whom achieved MCyR. CCyR was achieved in 46% of CP-CML patients (40% in the resistant/intolerant cohort and 66% in the T315I cohort). In general, patients with T315I mutations received fewer prior therapies than those in the resistant/intolerant cohort, which likely contributed to the higher response rates in the T315I patients. MR4.5 was achieved in 15% of CP-CML patients by 12 months on the PACE trial.37 The 5-year update to this study reported that 60%, 40%, and 24% of CP-CML patients achieved MCyR, MMR, and MR4.5, respectively. In the patients who achieved MCyR, the probability of maintaining this response for 5 years was 82% and the estimated 5-year OS was 73%.38
Toxicity. In 2013, after the regulatory approval of ponatinib, reports became available that the drug can cause an increase in arterial occlusive events, including fatal myocardial infarctions and cerebrovascular accidents. For this reason, dose reductions were implemented in patients who were deriving clinical benefit from ponatinib. In spite of these dose reductions, ≥ 90% of responders maintained their response for up to 40 months.38 Although the likelihood of developing an arterial occlusive event appears higher in the first year after starting ponatinib than in later years, the cumulative incidence of events continues to increase. The 5-year follow-up to the PACE trial reports 31% of patients experiencing any grade of arterial occlusive event while on ponatinib. Aside from these events, the most common treatment-emergent adverse events in ponatinib-treated patients on the PACE trial included rash (47%), abdominal pain (46%), headache (43%), dry skin (42%), constipation (41%), and hypertension (37%). Hematologic toxicity was also common, with 46% of patients experiencing any grade of thrombocytopenia, 20% experiencing neutropenia, and 20% anemia.38
Patients receiving ponatinib therapy should be monitored closely for any evidence of arterial or venous thrombosis. If an occlusive event occurs, ponatinib should be discontinued. Similarly, in the setting of any new or worsening heart failure symptoms, ponatinib should be promptly discontinued. Management of any underlying cardiovascular risk factors, including hypertension, hyperlipidemia, diabetes, or smoking history, is recommended, and these patients should be referred to a cardiologist for a full evaluation. In the absence of any contraindications to aspirin, low-dose aspirin should be considered as a means of decreasing cardiovascular risks associated with ponatinib. In patients with known risk factors, a ponatinib starting dose of 30 mg daily rather than the standard 45 mg daily may be a safer option, resulting in fewer arterial occlusive events, although the efficacy of this dose is still being studied in comparison to 45 mg daily.19
If ponatinib-induced transaminitis greater than 3 times the upper limit of normal occurs, ponatinib should be held until resolution to less than 3 times the upper limit of normal, at which point it should be resumed at a lower dose. Similarly, in the setting of elevated serum lipase or symptomatic pancreatitis, ponatinib should be held and restarted at a lower dose after resolution of symptoms.19
In the event of neutropenia or thrombocytopenia, ponatinib should be held until blood count recovery and then restarted at the same dose. If cytopenias occur for a second time, the dose of ponatinib should be lowered at the time of treatment reinitiation. If rash occurs, it can be addressed with topical or systemic steroids as well as dose reduction, interruption, or discontinuation.19
Conclusion
With the development of imatinib and the subsequent TKIs, dasatinib, nilotinib, bosutinib, and ponatinib, CP-CML has become a chronic disease with a life expectancy that is similar to that of the general population. Given the successful treatments available for these patients, it is crucial to identify patients with this diagnosis, ensure they receive a complete, appropriate diagnostic workup including a bone marrow biopsy and aspiration with cytogenetic testing, and select the best therapy for each individual patient. Once on treatment, the importance of frequent monitoring cannot be overstated. This is the only way to be certain patients are achieving the desired treatment milestones that correlate with the favorable long-term outcomes that have been observed with TKI-based treatment of CP-CML.
Corresponding author: Kendra Sweet, MD, MS, Department of Malignant Hematology, Moffitt Cancer Center, Tampa, FL.
Financial disclosures: Dr. Sweet has served on the Advisory Board and Speakers Bureau of Novartis, Bristol-Myers Squibb, Ariad Pharmaceuticals, and Pfizer, and has served as a consultant to Pfizer.
1. Faderl S, Talpaz M, Estrov Z, et al. The biology of chronic myeloid leukemia. N Engl J Med. 1999;341:164-172.
2. Surveillance, Epidemiology, and End Results Program. Cancer Stat Facts: Leukemia - Chronic Myeloid Leukemia (CML). 2018.
3. Huang X, Cortes J, Kantarjian H. Estimations of the increasing prevalence and plateau prevalence of chronic myeloid leukemia in the era of tyrosine kinase inhibitor therapy. Cancer. 2012;118:3123-3127.
4. Savage DG, Szydlo RM, Chase A, et al. Bone marrow transplantation for chronic myeloid leukaemia: the effects of differing criteria for defining chronic phase on probabilities of survival and relapse. Br J Haematol. 1997;99:30-35.
5. Knox WF, Bhavnani M, Davson J, Geary CG. Histological classification of chronic granulocytic leukaemia. Clin Lab Haematol. 1984;6:171-175.
6. Kvasnicka HM, Thiele J, Schmitt-Graeff A, et al. Impact of bone marrow morphology on multivariate risk classification in chronic myelogenous leukemia. Acta Haematol. 2003;109:53-56.
7. Cortes JE, Talpaz M, O’Brien S, et al. Staging of chronic myeloid leukemia in the imatinib era: an evaluation of the World Health Organization proposal. Cancer. 2006;106:1306-1315.
8. Druker BJ. Chronic myeloid leukemia. In: DeVita VT, Lawrence TS, Rosenberg SA, eds. DeVita, Hellman, and Rosenberg’s Cancer Principles & Practice of Oncology. 8th ed. Philadelphia, PA: Lippincott, Williams and Wilkins; 2007:2267-2304.
9. Arber DA, Orazi A, Hasserjian R, et al. The 2016 revision to the World Health Organization classification of myeloid neoplasms and acute leukemia. Blood. 2016;127:2391-2405.
10. Fabarius A, Leitner A, Hochhaus A, et al. Impact of additional cytogenetic aberrations at diagnosis on prognosis of CML: long-term observation of 1151 patients from the randomized CML Study IV. Blood. 2011;118:6760-6768.
11. Alhuraiji A, Kantarjian H, Boddu P, et al. Prognostic significance of additional chromosomal abnormalities at the time of diagnosis in patients with chronic myeloid leukemia treated with frontline tyrosine kinase inhibitors. Am J Hematol. 2018;93:84-90.
12. Melo JV. BCR-ABL gene variants. Baillieres Clin Haematol. 1997;10:203-222.
13. Kantarjian HM, Talpaz M, Cortes J, et al. Quantitative polymerase chain reaction monitoring of BCR-ABL during therapy with imatinib mesylate (STI571; gleevec) in chronic-phase chronic myelogenous leukemia. Clin Cancer Res. 2003;9:160-166.
14. Hughes T, Deininger M, Hochhaus A, et al. Monitoring CML patients responding to treatment with tyrosine kinase inhibitors: review and recommendations for harmonizing current methodology for detecting BCR-ABL transcripts and kinase domain mutations and for expressing results. Blood. 2006;108:28-37.
15. Hochhaus A, Larson RA, Guilhot F, et al. Long-term outcomes of imatinib treatment for chronic myeloid leukemia. N Engl J Med. 2017;376:917-927.
16. Cortes JE, Saglio G, Kantarjian HM, et al. Final 5-year study results of DASISION: the Dasatinib Versus Imatinib Study in Treatment-Naive Chronic Myeloid Leukemia Patients trial. J Clin Oncol. 2016;34:2333-2340.
17. Hochhaus A, Saglio G, Hughes TP, et al. Long-term benefits and risks of frontline nilotinib vs imatinib for chronic myeloid leukemia in chronic phase: 5-year update of the randomized ENESTnd trial. Leukemia. 2016;30:1044-1054.
18. Cortes JE, Gambacorti-Passerini C, Deininger MW, et al. Bosutinib versus imatinib for newly diagnosed chronic myeloid leukemia: results from the randomized BFORE trial. J Clin Oncol. 2018;36:231-237.
19. Radich JP, Deininger M, Abboud CN, et al. Chronic Myeloid Leukemia, Version 1.2019, NCCN Clinical Practice Guidelines in Oncology. J Natl Compr Canc Netw. 2018;16:1108-1135.
20. Faderl S, Talpaz M, Estrov Z, Kantarjian HM. Chronic myelogenous leukemia: biology and therapy. Ann Intern Med. 1999;131:207-219.
21. O’Brien SG, Guilhot F, Larson RA, et al. Imatinib compared with interferon and low-dose cytarabine for newly diagnosed chronic-phase chronic myeloid leukemia. N Engl J Med. 2003;348:994-1004.
22. Baccarani M, Deininger MW, Rosti G, et al. European LeukemiaNet recommendations for the management of chronic myeloid leukemia: 2013. Blood. 2013;122:872-884.
23. Larripa I, Ruiz MS, Gutierrez M, Bianchini M. [Guidelines for molecular monitoring of BCR-ABL1 in chronic myeloid leukemia patients by RT-qPCR]. Medicina (B Aires). 2017;77:61-72.
24. Marin D, Ibrahim AR, Lucas C, et al. Assessment of BCR-ABL1 transcript levels at 3 months is the only requirement for predicting outcome for patients with chronic myeloid leukemia treated with tyrosine kinase inhibitors. J Clin Oncol. 2012;30:232-238.
25. Hughes TP, Ross DM. Moving treatment-free remission into mainstream clinical practice in CML. Blood. 2016;128:17-23.
26. Druker BJ, Talpaz M, Resta DJ, et al. Efficacy and safety of a specific inhibitor of the BCR-ABL tyrosine kinase in chronic myeloid leukemia. N Engl J Med. 2001;344:1031-1037.
27. Baccarani M, Druker BJ, Branford S, et al. Long-term response to imatinib is not affected by the initial dose in patients with Philadelphia chromosome-positive chronic myeloid leukemia in chronic phase: final update from the Tyrosine Kinase Inhibitor Optimization and Selectivity (TOPS) study. Int J Hematol. 2014;99:616-624.
28. Yeung DT, Osborn MP, White DL, et al. TIDEL-II: first-line use of imatinib in CML with early switch to nilotinib for failure to achieve time-dependent molecular targets. Blood. 2015;125:915-923.
29. Druker BJ, Guilhot F, O’Brien SG, et al. Five-year follow-up of patients receiving imatinib for chronic myeloid leukemia. N Engl J Med. 2006;355:2408-2417.
30. Shah NP, Rousselot P, Schiffer C, et al. Dasatinib in imatinib-resistant or -intolerant chronic-phase, chronic myeloid leukemia patients: 7-year follow-up of study CA180-034. Am J Hematol. 2016;91:869-874.
31. Quintas-Cardama A, Han X, Kantarjian H, Cortes J. Tyrosine kinase inhibitor-induced platelet dysfunction in patients with chronic myeloid leukemia. Blood. 2009;114:261-263.
32. Giles FJ, le Coutre PD, Pinilla-Ibarz J, et al. Nilotinib in imatinib-resistant or imatinib-intolerant patients with chronic myeloid leukemia in chronic phase: 48-month follow-up results of a phase II study. Leukemia. 2013;27:107-112.
33. Saglio G, Kim DW, Issaragrisil S, et al. Nilotinib versus imatinib for newly diagnosed chronic myeloid leukemia. N Engl J Med. 2010;362:2251-2259.
34. Cortes JE, Khoury HJ, Kantarjian HM, et al. Long-term bosutinib for chronic phase chronic myeloid leukemia after failure of imatinib plus dasatinib and/or nilotinib. Am J Hematol. 2016;91:1206-1214.
35. Gambacorti-Passerini C, Cortes JE, Lipton JH, et al. Safety and efficacy of second-line bosutinib for chronic phase chronic myeloid leukemia over a five-year period: final results of a phase I/II study. Haematologica. 2018;103:1298-1307.
36. Cortes JE, Kim DW, Kantarjian HM, et al. Bosutinib versus imatinib in newly diagnosed chronic-phase chronic myeloid leukemia: results from the BELA trial. J Clin Oncol. 2012;30:3486-3492.
37. Cortes JE, Kim DW, Pinilla-Ibarz J, et al. A phase 2 trial of ponatinib in Philadelphia chromosome-positive leukemias. N Engl J Med. 2013;369:1783-1796.
38. Cortes JE, Kim DW, Pinilla-Ibarz J, et al. Ponatinib efficacy and safety in Philadelphia chromosome-positive leukemia: final 5-year results of the phase 2 PACE trial. Blood. 2018;132:393-404.
CML is divided into 3 phases based on the number of myeloblasts observed in the blood or bone marrow: chronic, accelerated, and blast. Most cases of CML are diagnosed in the chronic phase (CP), which is marked by proliferation of primarily the myeloid element.
Typical treatment for CML involves lifelong use of oral BCR-ABL tyrosine kinase inhibitors (TKIs). Currently, 5 TKIs have regulatory approval for treatment of this disease. The advent of TKIs, a class of small molecules targeting the tyrosine kinases, particularly the BCR-ABL tyrosine kinase, led to rapid changes in the management of CML and improved survival for patients. Patients diagnosed with chronic-phase CML (CP-CML) now have a life expectancy that is similar to that of the general population, as long as they receive appropriate TKI therapy and adhere to treatment. As such, it is crucial to identify patients with CML; ensure they receive a complete, appropriate diagnostic workup; and select the best therapy for each patient.
Epidemiology
According to SEER data estimates, 8430 new cases of CML were diagnosed in the United States in 2018. CML is a disease of older adults, with a median age of 65 years at diagnosis, and there is a slight male predominance. Between 2011 and 2015, the incidence of CML was 1.8 new cases per 100,000 persons per year. The median overall survival (OS) in patients with newly diagnosed CP-CML has not been reached.2 Given the effective treatments available for managing CML, it is estimated that the prevalence of CML in the United States will plateau at 180,000 patients by 2050.3
Diagnosis
Clinical Features
The diagnosis of CML is often suspected based on a finding of leukocytosis and, in some cases, thrombocytosis. In many patients this is an incidental finding on routine blood work, but approximately 50% of patients will present with constitutional symptoms associated with the disease. Characteristic features of the white blood cell differential include left-shifted maturation with neutrophilia and immature circulating myeloid cells. Basophilia and eosinophilia are often present as well. Splenomegaly is a common sign, present in 50% to 90% of patients at diagnosis. In those patients with symptoms related to CML at diagnosis, the most common presentation includes increasing fatigue, fevers, night sweats, early satiety, and weight loss. The diagnosis is confirmed by cytogenetic studies showing the Ph chromosome abnormality, t(9;22)(q34;q11.2), and/or reverse transcriptase polymerase chain reaction (PCR) showing BCR-ABL1 transcripts.
Testing
Bone marrow biopsy. There are 3 distinct phases of CML: CP, accelerated phase (AP), and blast phase (BP). Bone marrow biopsy and aspiration at diagnosis are mandatory in order to determine the phase of the disease. This distinction is based on the percentage of blasts, promyelocytes, and basophils present, as well as the platelet count and the presence or absence of extramedullary disease.4 The vast majority of patients at diagnosis have CML that is in the chronic phase. The typical appearance in CP-CML is a hypercellular marrow with granulocytic and occasionally megakaryocytic hyperplasia. In many cases, basophilia and/or eosinophilia are noted as well. Dysplasia is not a typical finding in CML.5 Bone marrow fibrosis can be seen in up to one-third of patients at diagnosis and may indicate a slightly worse prognosis.6 Although a diagnosis of CML can be made without a bone marrow biopsy, complete staging and prognostication are only possible with information gained from this test, including baseline karyotype and confirmation of CP versus a more advanced phase of CML.
Diagnostic criteria. The criteria for diagnosing AP-CML have not been agreed upon by various groups, but the modified MD Anderson Cancer Center (MDACC) criteria are used in the majority of clinical trials evaluating the efficacy of TKIs in preventing progression to advanced phases of CML. MDACC criteria define AP-CML as the presence of 1 of the following: 15% to 29% blasts in the peripheral blood or bone marrow, ≥ 30% peripheral blasts plus promyelocytes, ≥ 20% basophils in the blood or bone marrow, platelet count ≤ 100,000/μL unrelated to therapy, or clonal cytogenetic evolution in Ph-positive metaphases (Table).7
BP-CML is typically defined using the criteria developed by the International Bone Marrow Transplant Registry (IBMTR): ≥ 30% blasts in the peripheral blood and/or the bone marrow or the presence of extramedullary disease.8 Although not typically used in clinical trials, the revised World Health Organization (WHO) criteria for BP-CML include ≥ 20% blasts in the peripheral blood or bone marrow, extramedullary blast proliferation, and large foci or clusters of blasts in the bone marrow biopsy sample (Table).9
The defining feature of CML is the presence of the Ph chromosome abnormality. In a small subset of patients, additional chromosome abnormalities (ACA) in the Ph-positive cells may be identified at diagnosis. Some reports indicate that the presence of “major route” ACA (trisomy 8, isochromosome 17q, a second Ph chromosome, or trisomy 19) at diagnosis may negatively impact prognosis, but other reports contradict these findings.10,11
PCR assay. The typical BCR breakpoint in CML is the major breakpoint cluster region (M-BCR), which results in a 210-kDa protein (p210). Alternate breakpoints that are less frequently identified are the minor BCR (mBCR or p190), which is more commonly found in Ph-positive acute lymphoblastic leukemia (ALL), and the micro BCR (µBCR or p230), which is much less common and is often characterized by chronic neutrophilia.12 Identifying which BCR-ABL1 transcript is present in each patient using qualitative PCR is crucial in order to ensure proper monitoring during treatment.
The most sensitive method for detecting BCR-ABL1 mRNA transcripts is the quantitative real-time PCR (RQ-PCR) assay, which is typically done on peripheral blood. RQ-PCR is capable of detecting a single CML cell in the presence of ≥ 100,000 normal cells. This test should be done during the initial diagnostic workup in order to confirm the presence of BCR-ABL1 transcripts, and it is used as a standard method for monitoring response to TKI therapy.13 The International Scale (IS) is a standardized approach to reporting RQ-PCR results that was developed to allow comparison of results across various laboratories and has become the gold standard for reporting BCR-ABL1 transcript values.14
Determining Risk Scores
Calculating a patient’s Sokal score or EURO risk score at diagnosis remains an important component of the diagnostic workup in CP-CML, as this information has prognostic and therapeutic implications (an online calculator is available through European LeukemiaNet [ELN]). The risk for disease progression to the accelerated or blast phases is higher in patients with intermediate or high risk scores compared to those with a low risk score at diagnosis. The risk of progression in intermediate- or high-risk patients is lower when a second-generation TKI (dasatinib, nilotinib, or bosutinib) is used as frontline therapy compared to imatinib, and therefore, the National Comprehensive Cancer Network (NCCN) CML Panel recommends starting with a second-generation TKI in these patients.15-19
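For reference, the Sokal score is derived from age, spleen size, platelet count, and peripheral blast percentage at diagnosis. The following is a sketch using the originally published Sokal coefficients and the conventional risk cutoffs (< 0.8 low, 0.8–1.2 intermediate, > 1.2 high), which are not restated in the text above; in practice, the validated ELN online calculator should be used:

```python
import math

def sokal_score(age_years: float, spleen_cm: float,
                platelets_10e9_l: float, blasts_pct: float) -> float:
    """Sokal relative-risk score, using the 1984 published coefficients.

    spleen_cm        -- palpable spleen size in cm below the costal margin
    platelets_10e9_l -- platelet count in 10^9/L
    blasts_pct       -- peripheral blood blasts, percent
    """
    return math.exp(
        0.0116 * (age_years - 43.4)
        + 0.0345 * (spleen_cm - 7.51)
        + 0.188 * ((platelets_10e9_l / 700.0) ** 2 - 0.563)
        + 0.0887 * (blasts_pct - 2.10)
    )

def sokal_risk_group(score: float) -> str:
    # Conventional cutoffs: < 0.8 low, 0.8-1.2 intermediate, > 1.2 high
    if score < 0.8:
        return "low"
    if score <= 1.2:
        return "intermediate"
    return "high"
```

Note that at the reference values built into each term (age 43.4 years, spleen 7.51 cm, blasts 2.10%), every exponent term vanishes and the score is exactly 1.0, i.e., intermediate risk.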
Monitoring Response to Therapy
After confirming a diagnosis of CML and selecting the most appropriate TKI for first-line therapy, the successful management of CML patients relies on close monitoring and follow-up to ensure they are meeting the desired treatment milestones. Responses in CML can be assessed based on hematologic parameters, cytogenetic results, and molecular responses. A complete hematologic response (CHR) implies complete normalization of peripheral blood counts (with the exception of TKI-induced cytopenias) and resolution of any palpable splenomegaly. The majority of patients will achieve a CHR within 4 to 6 weeks after initiating CML-directed therapy.20
Cytogenetic Response
Cytogenetic responses are defined by the decrease in the number of Ph chromosome–positive metaphases when assessed on bone marrow cytogenetics. A partial cytogenetic response (PCyR) is defined as having 1% to 35% Ph-positive metaphases, a major cytogenetic response (MCyR) as having 0% to 35% Ph-positive metaphases, and a complete cytogenetic response (CCyR) implies that no Ph-positive metaphases are identified on bone marrow cytogenetics. An ideal response is the achievement of PCyR after 3 months on a TKI and a CCyR after 12 months on a TKI.21
Molecular Response
Once a patient has achieved a CCyR, monitoring their response to therapy can only be done using RQ-PCR to measure BCR-ABL1 transcripts in the peripheral blood. The NCCN and the ELN recommend monitoring RQ-PCR from the peripheral blood every 3 months in order to assess response to TKIs.19,22 As noted, the IS has become the gold standard reporting system for all BCR-ABL1 transcript levels in the majority of laboratories worldwide.14,23 Molecular responses are based on a log reduction in BCR-ABL1 transcripts from a standardized baseline. Many molecular responses can be correlated with cytogenetic responses such that, if reliable RQ-PCR testing is available, monitoring can be done using only peripheral blood RQ-PCR rather than repeat bone marrow biopsies. For example, an early molecular response (EMR) is defined as a RQ-PCR value of ≤ 10% IS, which is approximately equivalent to a PCyR.24 A value of 1% IS is approximately equivalent to a CCyR. A major molecular response (MMR) is a ≥ 3-log reduction in BCR-ABL1 transcripts from baseline and is a value of ≤ 0.1% IS. Deeper levels of molecular response are best described by the log reduction in BCR-ABL1 transcripts, with a 4-log reduction denoted as MR4.0, a 4.5-log reduction as MR4.5, and so forth. Complete molecular response (CMR) is defined by the level of sensitivity of the RQ-PCR assay being used.14
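Because the IS baseline is standardized at 100%, the log reduction corresponding to any reported value is simply −log10(value/100); for example, MMR at 0.1% IS is a 3-log reduction. A brief sketch of this arithmetic and the thresholds quoted above (the 0.01% and 0.0032% IS cutoffs for MR4.0 and MR4.5 are the commonly used equivalents, assumed here rather than stated in the text):

```python
import math

def log_reduction(bcr_abl_pct_is: float) -> float:
    """Log reduction of BCR-ABL1 transcripts from the standardized 100% IS baseline."""
    return -math.log10(bcr_abl_pct_is / 100.0)

def molecular_response(bcr_abl_pct_is: float) -> str:
    """Classify a BCR-ABL1 (IS) value against commonly used response thresholds."""
    if bcr_abl_pct_is <= 0.0032:   # >= 4.5-log reduction
        return "MR4.5"
    if bcr_abl_pct_is <= 0.01:     # >= 4-log reduction
        return "MR4.0"
    if bcr_abl_pct_is <= 0.1:      # >= 3-log reduction
        return "MMR"
    if bcr_abl_pct_is <= 1.0:      # approximately equivalent to CCyR
        return "CCyR-equivalent"
    if bcr_abl_pct_is <= 10.0:     # early molecular response, ~PCyR
        return "EMR"
    return "no milestone met"
```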
The definition of relapsed disease in CML is dependent on the type of response the patient had previously achieved. Relapse could be the loss of a hematologic or cytogenetic response, but fluctuations in BCR-ABL1 transcripts on routine RQ-PCR do not necessarily indicate relapsed CML. A 1-log increase in the level of BCR-ABL1 transcripts with a concurrent loss of MMR should prompt a bone marrow biopsy in order to assess for the loss of CCyR, and thus a cytogenetic relapse; however, this loss of MMR does not define relapse in and of itself. In the setting of relapsed disease, testing should be done to look for possible ABL kinase domain mutations, and alternate therapy should be selected.19
Multiple reports have identified the prognostic relevance of achieving an EMR at 3 and 6 months after starting TKI therapy. Marin and colleagues reported that in 282 imatinib-treated patients, there was a significant improvement in 8-year OS, progression-free survival (PFS), and cumulative incidence of CCyR and CMR in patients who had BCR-ABL1 transcripts < 9.84% IS after 3 months on treatment.24 These data highlight the importance of early molecular monitoring in order to ensure the best outcomes for patients with CP-CML.
The NCCN CML guidelines and ELN recommendations both agree that an ideal response after 3 months on a TKI is BCR-ABL1 transcripts < 10% IS, but treatment is not considered to be failing at this point if the patient marginally misses this milestone. After 6 months on treatment, an ideal response is considered BCR-ABL1 transcripts < 1%–10% IS. Ideally, patients will have BCR-ABL1 transcripts < 0.1%–1% IS by the time they complete 12 months of TKI therapy, suggesting that these patients have at least achieved a CCyR.19,22 Even after patients achieve these early milestones, frequent monitoring by RQ-PCR is required to ensure that they are maintaining their response to treatment. This will help to ensure patient compliance with treatment and will also help to identify a select subset of patients who could potentially be considered for an attempt at TKI cessation (not discussed in detail here) after a minimum of 3 years on therapy.19,25
Selecting First-line TKI Therapy
Selection of the most appropriate first-line TKI for newly diagnosed CP-CML patients requires incorporation of many patient-specific factors. These factors include baseline karyotype and confirmation of CP-CML through bone marrow biopsy, Sokal or EURO risk score, and a thorough patient history, including a clear understanding of the patient’s comorbidities. The adverse effect profile of all TKIs must be considered in conjunction with the patient’s ongoing medical issues in order to decrease the likelihood of worsening their current symptoms or causing a severe complication from TKI therapy.
Imatinib
The management of CML was revolutionized by the development and ultimate regulatory approval of imatinib mesylate in 2001. Imatinib was the first small-molecule cancer therapy developed and approved. It acts by binding to the adenosine triphosphate (ATP) binding site in the catalytic domain of BCR-ABL, thus inhibiting the oncoprotein’s tyrosine kinase activity.26
The International Randomized Study of Interferon versus STI571 (IRIS) trial was a randomized phase 3 study that compared imatinib 400 mg daily to interferon alfa (IFNa) plus cytarabine. More than 1000 CP-CML patients were randomly assigned 1:1 to either imatinib or IFNa plus cytarabine and were assessed for event-free survival, hematologic and cytogenetic responses, freedom from progression to AP or BP, and toxicity. Imatinib was superior to the prior standard of care for all these outcomes.21 The long-term follow-up of the IRIS trial reported an 83% estimated 10-year OS and 79% estimated event-free survival for patients on the imatinib arm of this study.15 The cumulative rate of CCyR was 82.8%. Of the 204 imatinib-treated patients who could undergo a molecular response evaluation at 10 years, 93.1% had a MMR and 63.2% had a MR4.5, suggesting durable, deep molecular responses for many patients. The estimated 10-year rate of freedom from progression to AP or BP was 92.1%.
Higher doses of imatinib (600-800 mg daily) have been studied in an attempt to overcome resistance and improve cytogenetic and molecular response rates. The Tyrosine Kinase Inhibitor Optimization and Selectivity (TOPS) trial was a randomized phase 3 study that compared imatinib 800 mg daily to imatinib 400 mg daily. Although the 6-month assessments found increased rates of CCyR and a MMR in the higher-dose imatinib arm, these differences were no longer present at the 12-month assessment. Furthermore, the higher dose of imatinib led to a significantly higher incidence of grade 3/4 hematologic adverse events, and approximately 50% of patients on imatinib 800 mg daily required a dose reduction to less than 600 mg daily because of toxicity.27
The Therapeutic Intensification in De Novo Leukaemia (TIDEL)-II study used plasma trough levels of imatinib on day 22 of treatment with imatinib 600 mg daily to determine if patients should escalate the imatinib dose to 800 mg daily. In patients who did not meet molecular milestones at 3, 6, or 12 months, cohort 1 was dose escalated to imatinib 800 mg daily and subsequently switched to nilotinib 400 mg twice daily for failing the same target 3 months later, and cohort 2 was switched to nilotinib. At 2 years, 73% of patients achieved MMR and 34% achieved MR4.5, suggesting that initial treatment with higher-dose imatinib, followed by a switch to nilotinib in those failing to achieve desired milestones, could be an effective strategy for managing newly diagnosed CP-CML.28
Toxicity. The standard starting dose of imatinib in CP-CML patients is 400 mg daily. The safety profile of imatinib has been very well established. In the IRIS trial, the most common adverse events (all grades in decreasing order of frequency) were peripheral and periorbital edema (60%), nausea (50%), muscle cramps (49%), musculoskeletal pain (47%), diarrhea (45%), rash (40%), fatigue (39%), abdominal pain (37%), headache (37%), and joint pain (31%). Grade 3/4 liver enzyme elevation can occur in 5% of patients.29 In the event of severe liver toxicity or fluid retention, imatinib should be held until the event resolves. At that time, imatinib can be restarted if deemed appropriate, but this is dependent on the severity of the inciting event. Fluid retention can be managed by the use of supportive care, diuretics, imatinib dose reduction, dose interruption, or imatinib discontinuation if the fluid retention is severe. Muscle cramps can be managed by the use of calcium supplements or tonic water. Management of rash can include topical or systemic steroids, or in some cases imatinib dose reduction, interruption, or discontinuation.19
Grade 3/4 imatinib-induced hematologic toxicity is not uncommon, with 17% of patients experiencing neutropenia, 9% thrombocytopenia, and 4% anemia. These adverse events occurred most commonly during the first year of therapy, and the frequency decreased over time.15,29 Depending on the degree of cytopenias, imatinib dosing should be interrupted until recovery of the absolute neutrophil count or platelet count, and can often be resumed at 400 mg daily. However, if cytopenias recur, imatinib should be held and subsequently restarted at 300 mg daily.19
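The cytopenia rule just described is a simple two-step escalation: hold until recovery, resume at the full dose, and reduce only on recurrence. A minimal sketch under those assumptions (parameter names and return strings are illustrative; real dosing decisions depend on clinical judgment):

```python
def imatinib_dose_after_cytopenia(counts_recovered: bool,
                                  recurrent_cytopenia: bool) -> str:
    """Sketch of the imatinib cytopenia management described above,
    assuming a CP-CML patient on the standard 400 mg daily dose."""
    if not counts_recovered:
        return "hold imatinib until ANC/platelet recovery"
    if recurrent_cytopenia:
        # second episode of significant cytopenia -> restart at a reduced dose
        return "restart at 300 mg daily"
    return "resume at 400 mg daily"
```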
Dasatinib
Dasatinib is a second-generation TKI that has regulatory approval for treatment of adult patients with newly diagnosed CP-CML or CP-CML in patients with resistance or intolerance to prior TKIs. In addition to dasatinib’s ability to inhibit ABL kinases, it is also known to be a potent inhibitor of Src family kinases. Dasatinib has shown efficacy in patients who have developed imatinib-resistant ABL kinase domain mutations.
Dasatinib was initially approved as second-line therapy in patients with resistance or intolerance to imatinib. This indication was based on the results of the phase 3 CA180-034 trial, which ultimately identified dasatinib 100 mg daily as the optimal dose. In this trial, 74% of patients enrolled had resistance to imatinib and the remainder were intolerant. The 7-year follow-up of patients randomized to dasatinib 100 mg daily (n = 167) indicated that 46% achieved MMR while on study. Of the 124 imatinib-resistant patients on dasatinib 100 mg daily, the 7-year PFS was 39% and OS was 63%. In the 43 imatinib-intolerant patients, the 7-year PFS was 51% and OS was 70%.30
Dasatinib 100 mg daily was compared to imatinib 400 mg daily in newly diagnosed CP-CML patients in the randomized phase 3 DASISION (Dasatinib versus Imatinib Study in Treatment-Naive CML Patients) trial. More patients on the dasatinib arm achieved an EMR of BCR-ABL1 transcripts ≤ 10% IS after 3 months on treatment compared to imatinib (84% versus 64%). Furthermore, the 5-year follow-up reports that the cumulative incidence of MMR and MR4.5 in dasatinib-treated patients was 76% and 42%, and was 64% and 33% with imatinib (P = 0.0022 and P = 0.0251, respectively). Fewer patients treated with dasatinib progressed to AP or BP (4.6%) compared to imatinib (7.3%), but the estimated 5-year OS was similar between the 2 arms (91% for dasatinib versus 90% for imatinib).16 Regulatory approval for dasatinib as first-line therapy in newly diagnosed CML patients was based on results of the DASISION trial.
Toxicity. Most dasatinib-related toxicities are reported as grade 1 or grade 2, but grade 3/4 hematologic adverse events are fairly common. In the DASISION trial, grade 3/4 neutropenia, anemia, and thrombocytopenia occurred in 29%, 13%, and 22% of dasatinib-treated patients, respectively. Cytopenias can generally be managed with temporary dose interruptions or dose reductions.
During the 5-year follow-up of the DASISION trial, pleural effusions were reported in 28% of patients, most of which were grade 1/2. Effusions occurred at a rate of ≤ 8% per year, suggesting a stable incidence over time, and they appear to be dose-dependent.16 Depending on the severity, pleural effusion may be treated with diuretics, dose interruption, and, in some instances, steroids or a thoracentesis. Typically, dasatinib can be restarted at 1 dose level lower than the previous dose once the effusion has resolved.19 Other, less common side effects of dasatinib include pulmonary hypertension (5% of patients), as well as abdominal pain, fluid retention, headaches, fatigue, musculoskeletal pain, rash, nausea, and diarrhea. Pulmonary hypertension is typically reversible after cessation of dasatinib, and thus dasatinib should be permanently discontinued once the diagnosis is confirmed. Fluid retention is often treated with diuretics and supportive care. Nausea and diarrhea are generally manageable and occur less frequently when dasatinib is taken with food and a large glass of water. Antiemetics and antidiarrheals can be used as needed. Troublesome rash is best managed with topical or systemic steroids as well as possible dose reduction or dose interruption.16,19 In the DASISION trial, adverse events led to therapy discontinuation more often in the dasatinib group than in the imatinib group (16% versus 7%).16 Bleeding, particularly in the setting of thrombocytopenia, has been reported in patients being treated with dasatinib as a result of the drug-induced reversible inhibition of platelet aggregation.31
Nilotinib
The structure of nilotinib is similar to that of imatinib; however, it has a markedly increased affinity for the ATP‐binding site on the BCR-ABL1 protein. It was initially given regulatory approval in the setting of imatinib failure. Nilotinib was studied at a dose of 400 mg twice daily in 321 patients who were imatinib-resistant or -intolerant. It proved to be highly effective at inducing cytogenetic remissions in the second-line setting, with 59% of patients achieving a MCyR and 45% achieving a CCyR. With a median follow-up time of 4 years, the OS was 78%.32
Nilotinib gained regulatory approval for use as a first-line TKI after completion of the randomized phase 3 ENESTnd (Evaluating Nilotinib Efficacy and Safety in Clinical Trials-Newly Diagnosed Patients) trial. ENESTnd was a 3-arm study comparing nilotinib 300 mg twice daily versus nilotinib 400 mg twice daily versus imatinib 400 mg daily in newly diagnosed, previously untreated patients diagnosed with CP-CML. The primary endpoint of this clinical trial was rate of MMR at 12 months.33 Nilotinib surpassed imatinib in this regard, with 44% of patients on nilotinib 300 mg twice daily achieving MMR at 12 months versus 43% of nilotinib 400 mg twice daily patients versus 22% of the imatinib-treated patients (P < 0.001 for both comparisons). Furthermore, the rate of CCyR by 12 months was significantly higher for both nilotinib arms compared with imatinib (80% for nilotinib 300 mg, 78% for nilotinib 400 mg, and 65% for imatinib) (P < 0.001).12 Based on this data, nilotinib 300 mg twice daily was chosen as the standard dose of nilotinib in the first-line setting. After 5 years of follow-up on the ENESTnd study, there were fewer progressions to AP/BP CML in nilotinib-treated patients compared with imatinib. MMR was achieved in 77% of nilotinib 300 mg patients compared with 60.4% of patients on the imatinib arm. MR4.5 was also more common in patients treated with nilotinib 300 mg twice daily, with a rate of 53.5% at 5 years versus 31.4% in the imatinib arm.17 In spite of the deeper cytogenetic and molecular responses achieved with nilotinib, this did not translate into a significant improvement in OS. The 5-year OS rate was 93.7% in nilotinib 300 mg patients versus 91.7% in imatinib-treated patients, and this difference lacked statistical significance.17
Toxicity. Although some similarities exist between the toxicity profiles of nilotinib and imatinib, each drug has some distinct adverse events. On the ENESTnd trial, the rate of any grade 3/4 non-hematologic adverse event was fairly low; however, lower-grade toxicities were not uncommon. Patients treated with nilotinib 300 mg twice daily most commonly experienced rash (31%), headache (14%), pruritus (15%), and fatigue (11%). The most frequently reported laboratory abnormalities included increased total bilirubin (53%), hypophosphatemia (32%), hyperglycemia (36%), elevated lipase (24%), increased alanine aminotransferase (ALT; 66%), and increased aspartate aminotransferase (AST; 40%). Any grade of neutropenia, thrombocytopenia, or anemia occurred at rates of 43%, 48%, and 38%, respectively.33 Although nilotinib has a Black Box Warning from the US Food and Drug Administration for QT interval prolongation, no patients on the ENESTnd trial experienced a QT interval corrected for heart rate greater than 500 msec.12
More recent concerns have emerged regarding the potential for cardiovascular toxicity after long-term use of nilotinib. The 5-year update of ENESTnd reported cardiovascular events (ischemic heart disease, ischemic cerebrovascular events, or peripheral arterial disease) in 7.5% of patients treated with nilotinib 300 mg twice daily, compared with 2.1% of imatinib-treated patients. The frequency of these cardiovascular events increased linearly over time in both arms. Elevations in total cholesterol from baseline occurred in 27.6% of nilotinib patients compared with 3.9% of imatinib patients. Furthermore, clinically meaningful increases in low-density lipoprotein cholesterol and glycated hemoglobin occurred more frequently with nilotinib therapy.33
Nilotinib should be taken on an empty stomach; therefore, patients should be made aware of the need to fast for 2 hours prior to each dose and 1 hour after each dose. Given the potential risk of QT interval prolongation, a baseline electrocardiogram (ECG) is recommended prior to initiating treatment to ensure the QT interval is within a normal range. A repeat ECG should be done approximately 7 days after nilotinib initiation to ensure no prolongation of the QT interval after starting. Close monitoring of potassium and magnesium levels is important to decrease the risk of cardiac arrhythmias, and concomitant use of drugs considered strong CYP3A4 inhibitors should be avoided.19
If the patient experiences any grade 3 or higher laboratory abnormalities, nilotinib should be held until resolution of the toxicity, and then restarted at a lower dose. Similarly, if patients develop significant neutropenia or thrombocytopenia, nilotinib doses should be interrupted until resolution of the cytopenias. At that point, nilotinib can be reinitiated at either the same or a lower dose. Rash can be managed by the use of topical or systemic steroids as well as potential dose reduction, interruption, or discontinuation.
Given the concerns for potential cardiovascular events with long-term use of nilotinib, caution is advised when prescribing it to any patient with a history of cardiovascular disease or peripheral arterial occlusive disease. At the first sign of new occlusive disease, nilotinib should be discontinued.19
Bosutinib
Bosutinib is a second-generation BCR-ABL TKI with activity against the Src family of kinases; it was initially approved to treat patients with CP-, AP-, or BP-CML after resistance or intolerance to imatinib. Long-term data has been reported from the phase 1/2 trial of bosutinib therapy in patients with CP-CML who developed resistance or intolerance to imatinib plus dasatinib and/or nilotinib. A total of 119 patients were included in the 4-year follow-up; 38 were resistant/intolerant to imatinib and resistant to dasatinib, 50 were resistant/intolerant to imatinib and intolerant to dasatinib, 26 were resistant/intolerant to imatinib and resistant to nilotinib, and 5 were resistant/intolerant to imatinib and intolerant to nilotinib or resistant/intolerant to dasatinib and nilotinib. Bosutinib 400 mg daily was studied in this setting. Of the 38 patients with imatinib resistance/intolerance and dasatinib resistance, 39% achieved MCyR, 22% achieved CCyR, and the OS was 67%. Of the 50 patients with imatinib resistance/intolerance and dasatinib intolerance, 42% achieved MCyR, 40% achieved CCyR, and the OS was 80%. Finally, in the 26 patients with imatinib resistance/intolerance and nilotinib resistance, 38% achieved MCyR, 31% achieved CCyR, and the OS was 87%.34
Five-year follow-up from the phase 1/2 clinical trial that studied bosutinib 500 mg daily in CP-CML patients after imatinib failure reported data on 284 patients. By 5 years on study, 60% of patients had achieved MCyR and 50% achieved CCyR with a 71% and 69% probability, respectively, of maintaining these responses at 5 years. The 5-year OS was 84%.35 These data led to the regulatory approval of bosutinib 500 mg daily as second-line or later therapy.
Bosutinib was initially studied in the first-line setting in the randomized phase 3 BELA (Bosutinib Efficacy and Safety in Newly Diagnosed Chronic Myeloid Leukemia) trial. This trial compared bosutinib 500 mg daily to imatinib 400 mg daily in newly diagnosed, previously untreated CP-CML patients. This trial failed to meet its primary endpoint of increased rate of CCyR at 12 months, with 70% of bosutinib patients achieving this response, compared to 68% of imatinib-treated patients (P = 0.601). In spite of this, the rate of MMR at 12 months was significantly higher in the bosutinib arm (41%) compared to the imatinib arm (27%; P = 0.001).36
A second phase 3 trial (BFORE) was designed to study bosutinib 400 mg daily versus imatinib in newly diagnosed, previously untreated CP-CML patients. This study enrolled 536 patients who were randomly assigned 1:1 to bosutinib versus imatinib. The primary endpoint of this trial was rate of MMR at 12 months. A significantly higher number of bosutinib-treated patients achieved this response (47.2%) compared with imatinib-treated patients (36.9%, P = 0.02). Furthermore, by 12 months 77.2% of patients on the bosutinib arm had achieved CCyR compared with 66.4% on the imatinib arm, and this difference did meet statistical significance (P = 0.0075). A lower rate of progression to AP- or BP-CML was noted in bosutinib-treated patients as well (1.6% versus 2.5%). Based on this data, bosutinib gained regulatory approval for first-line therapy in CP-CML at a dose of 400 mg daily.18
Toxicity. On the BFORE trial, the most common treatment-emergent adverse events of any grade reported in the bosutinib-treated patients were diarrhea (70.1%), nausea (35.1%), increased ALT (30.6%), and increased AST (22.8%). Musculoskeletal pain or spasms occurred in 29.5% of patients, rash in 19.8%, fatigue in 19.4%, and headache in 18.7%. Hematologic toxicity was also reported, but most was grade 1/2. Thrombocytopenia was reported in 35.1%, anemia in 18.7%, and neutropenia in 11.2%.18
Cardiovascular events occurred in 5.2% of patients on the bosutinib arm of the BFORE trial, which was similar to the rate observed in imatinib patients. The most common cardiovascular event was QT interval prolongation, which occurred in 1.5% of patients. Pleural effusions were reported in 1.9% of patients treated with bosutinib, and none were grade 3 or higher.18
If liver enzyme elevation occurs at a value greater than 5 times the institutional upper limit of normal, bosutinib should be held until the level recovers to ≤ 2.5 times the upper limit of normal, at which point bosutinib can be restarted at a lower dose. If recovery takes longer than 4 weeks, bosutinib should be permanently discontinued. Liver enzymes elevated greater than 3 times the institutional upper limit of normal and a concurrent elevation in total bilirubin to 2 times the upper limit of normal are consistent with Hy’s law, and bosutinib should be discontinued. Although diarrhea is the most common toxicity associated with bosutinib, it is commonly low grade and transient. Diarrhea occurs most frequently in the first few days after initiating bosutinib. It can often be managed with over-the-counter antidiarrheal medications, but if the diarrhea is grade 3 or higher, bosutinib should be held until recovery to grade 1 or lower. Gastrointestinal side effects may be improved by taking bosutinib with a meal and a large glass of water. Fluid retention can be managed with diuretics and supportive care. Finally, if rash occurs, this can be addressed with topical or systemic steroids as well as bosutinib dose reduction, interruption, or discontinuation.19
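The hepatotoxicity rules above (hold at > 5× ULN, restart lower once ≤ 2.5× ULN, discontinue for Hy’s law or for recovery taking over 4 weeks) can be sketched as a small decision function. This is an illustration, not a prescribing algorithm; the parameter names are invented, and inputs are expressed as multiples of the institutional upper limit of normal (ULN):

```python
def bosutinib_liver_action(alt_x_uln: float, bili_x_uln: float,
                           weeks_on_hold: float = 0.0) -> str:
    """Sketch of the bosutinib hepatotoxicity rules described above.
    Inputs are multiples of the institutional upper limit of normal (ULN)."""
    # Hy's law pattern: transaminases > 3x ULN with total bilirubin >= 2x ULN
    if alt_x_uln > 3 and bili_x_uln >= 2:
        return "discontinue bosutinib (Hy's law)"
    if alt_x_uln > 5:
        if weeks_on_hold > 4:
            return "discontinue bosutinib (no recovery within 4 weeks)"
        return "hold until <= 2.5x ULN, then restart at a lower dose"
    return "no dose change required"
```

Note that the Hy’s law check comes first: a transaminase elevation that would otherwise trigger only a hold becomes a permanent discontinuation when accompanied by a concurrent bilirubin elevation.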
Similar to other TKIs, if bosutinib-induced cytopenias occur, treatment should be held and restarted at the same or a lower dose upon blood count recovery.19
Ponatinib
The most common cause of TKI resistance in CP-CML is the development of ABL kinase domain mutations. The majority of imatinib-resistant mutations can be overcome by the use of second-generation TKIs, including dasatinib, nilotinib, or bosutinib. However, ponatinib is the only BCR-ABL TKI able to overcome a T315I mutation. The phase 2 PACE (Ponatinib Ph-positive ALL and CML Evaluation) trial enrolled patients with CP-, AP-, or BP-CML as well as patients with Ph-positive acute lymphoblastic leukemia who were resistant or intolerant to nilotinib or dasatinib, or who had evidence of a T315I mutation. The starting dose of ponatinib on this trial was 45 mg daily.37 The PACE trial enrolled 267 patients with CP-CML: 203 with resistance or intolerance to nilotinib or dasatinib, and 64 with a T315I mutation. The primary endpoint in the CP cohort was rate of MCyR at any time within 12 months of starting ponatinib. The overall rate of MCyR by 12 months in the CP-CML patients was 56%. In those with a T315I mutation, 70% achieved MCyR, which compared favorably with those with resistance or intolerance to nilotinib or dasatinib, 51% of whom achieved MCyR. CCyR was achieved in 46% of CP-CML patients (40% in the resistant/intolerant cohort and 66% in the T315I cohort). In general, patients with T315I mutations received fewer prior therapies than those in the resistant/intolerant cohort, which likely contributed to the higher response rates in the T315I patients. MR4.5 was achieved in 15% of CP-CML patients by 12 months on the PACE trial.37 The 5-year update to this study reported that 60%, 40%, and 24% of CP-CML patients achieved MCyR, MMR, and MR4.5, respectively. In the patients who achieved MCyR, the probability of maintaining this response for 5 years was 82% and the estimated 5-year OS was 73%.19
Toxicity. In 2013, after the regulatory approval of ponatinib, reports became available that the drug can cause an increase in arterial occlusive events, including fatal myocardial infarctions and cerebrovascular accidents. For this reason, dose reductions were implemented in patients who were deriving clinical benefit from ponatinib. In spite of these dose reductions, ≥ 90% of responders maintained their response for up to 40 months.38 Although the likelihood of developing an arterial occlusive event appears higher in the first year after starting ponatinib than in later years, the cumulative incidence of events continues to increase. The 5-year follow-up to the PACE trial reports 31% of patients experiencing any grade of arterial occlusive event while on ponatinib. Aside from these events, the most common treatment-emergent adverse events in ponatinib-treated patients on the PACE trial included rash (47%), abdominal pain (46%), headache (43%), dry skin (42%), constipation (41%), and hypertension (37%). Hematologic toxicity was also common, with 46% of patients experiencing any grade of thrombocytopenia, 20% experiencing neutropenia, and 20% anemia.38
Patients receiving ponatinib therapy should be monitored closely for any evidence of arterial or venous thrombosis. If an occlusive event occurs, ponatinib should be discontinued. Similarly, in the setting of any new or worsening heart failure symptoms, ponatinib should be promptly discontinued. Management of any underlying cardiovascular risk factors, including hypertension, hyperlipidemia, diabetes, or smoking history, is recommended, and these patients should be referred to a cardiologist for a full evaluation. In the absence of any contraindications to aspirin, low-dose aspirin should be considered as a means of decreasing cardiovascular risks associated with ponatinib. In patients with known risk factors, a ponatinib starting dose of 30 mg daily rather than the standard 45 mg daily may be a safer option, resulting in fewer arterial occlusive events, although the efficacy of this dose is still being studied in comparison to 45 mg daily.19
If ponatinib-induced transaminitis greater than 3 times the upper limit of normal occurs, ponatinib should be held until resolution to less than 3 times the upper limit of normal, at which point it should be resumed at a lower dose. Similarly, in the setting of elevated serum lipase or symptomatic pancreatitis, ponatinib should be held and restarted at a lower dose after resolution of symptoms.19
In the event of neutropenia or thrombocytopenia, ponatinib should be held until blood count recovery and then restarted at the same dose. If cytopenias occur for a second time, the dose of ponatinib should be lowered at the time of treatment reinitiation. If rash occurs, it can be addressed with topical or systemic steroids as well as dose reduction, interruption, or discontinuation.19
Conclusion
With the development of imatinib and the subsequent TKIs, dasatinib, nilotinib, bosutinib, and ponatinib, CP-CML has become a chronic disease with a life expectancy that is similar to that of the general population. Given the successful treatments available for these patients, it is crucial to identify patients with this diagnosis, ensure they receive a complete, appropriate diagnostic workup including a bone marrow biopsy and aspiration with cytogenetic testing, and select the best therapy for each individual patient. Once on treatment, the importance of frequent monitoring cannot be overstated. This is the only way to be certain patients are achieving the desired treatment milestones that correlate with the favorable long-term outcomes that have been observed with TKI-based treatment of CP-CML.
Corresponding author: Kendra Sweet, MD, MS, Department of Malignant Hematology, Moffitt Cancer Center, Tampa, FL.
Financial disclosures: Dr. Sweet has served on the Advisory Board and Speakers Bureau of Novartis, Bristol-Myers Squibb, Ariad Pharmaceuticals, and Pfizer, and has served as a consultant to Pfizer.
CML is divided into 3 phases based on the number of myeloblasts observed in the blood or bone marrow: chronic, accelerated, and blast. Most cases of CML are diagnosed in the chronic phase (CP), which is marked by proliferation primarily of the myeloid elements.
Typical treatment for CML involves lifelong use of oral BCR-ABL tyrosine kinase inhibitors (TKIs). Currently, 5 TKIs have regulatory approval for treatment of this disease. The advent of TKIs, a class of small molecules targeting the tyrosine kinases, particularly the BCR-ABL tyrosine kinase, led to rapid changes in the management of CML and improved survival for patients. Patients diagnosed with chronic-phase CML (CP-CML) now have a life expectancy that is similar to that of the general population, as long as they receive appropriate TKI therapy and adhere to treatment. As such, it is crucial to identify patients with CML; ensure they receive a complete, appropriate diagnostic workup; and select the best therapy for each patient.
Epidemiology
According to SEER data estimates, 8430 new cases of CML were diagnosed in the United States in 2018. CML is a disease of older adults, with a median age of 65 years at diagnosis, and there is a slight male predominance. Between 2011 and 2015, the incidence of CML was 1.8 new cases per 100,000 persons per year. The median overall survival (OS) in patients with newly diagnosed CP-CML has not been reached.2 Given the effective treatments available for managing CML, it is estimated that the prevalence of CML in the United States will plateau at 180,000 patients by 2050.3
Diagnosis
Clinical Features
The diagnosis of CML is often suspected based on an incidental finding of leukocytosis and, in some cases, thrombocytosis on routine blood work; however, approximately 50% of patients will present with constitutional symptoms associated with the disease. Characteristic features of the white blood cell differential include left-shifted maturation with neutrophilia and immature circulating myeloid cells. Basophilia and eosinophilia are often present as well. Splenomegaly is a common sign, present in 50% to 90% of patients at diagnosis. In those patients with symptoms related to CML at diagnosis, the most common presentation includes increasing fatigue, fevers, night sweats, early satiety, and weight loss. The diagnosis is confirmed by cytogenetic studies showing the Ph chromosome abnormality, t(9;22)(q34;q11.2), and/or reverse transcriptase polymerase chain reaction (PCR) showing BCR-ABL1 transcripts.
Testing
Bone marrow biopsy. There are 3 distinct phases of CML: CP, accelerated phase (AP), and blast phase (BP). Bone marrow biopsy and aspiration are mandatory at diagnosis to determine the phase of the disease. This distinction is based on the percentage of blasts, promyelocytes, and basophils present as well as the platelet count and presence or absence of extramedullary disease.4 The vast majority of patients at diagnosis have CML that is in the chronic phase. The typical appearance in CP-CML is a hypercellular marrow with granulocytic and occasionally megakaryocytic hyperplasia. In many cases, basophilia and/or eosinophilia are noted as well. Dysplasia is not a typical finding in CML.5 Bone marrow fibrosis can be seen in up to one-third of patients at diagnosis, and may indicate a slightly worse prognosis.6 Although a diagnosis of CML can be made without a bone marrow biopsy, complete staging and prognostication are only possible with information gained from this test, including baseline karyotype and confirmation of CP versus a more advanced phase of CML.
Diagnostic criteria. The criteria for diagnosing AP-CML have not been agreed upon across groups, but the modified MD Anderson Cancer Center (MDACC) criteria are used in the majority of clinical trials evaluating the efficacy of TKIs in preventing progression to advanced phases of CML. MDACC criteria define AP-CML as the presence of any 1 of the following: 15% to 29% blasts in the peripheral blood or bone marrow, ≥ 30% peripheral blasts plus promyelocytes, ≥ 20% basophils in the blood or bone marrow, platelet count ≤ 100,000/μL unrelated to therapy, or clonal cytogenetic evolution in Ph-positive metaphases (Table).7
BP-CML is typically defined using the criteria developed by the International Bone Marrow Transplant Registry (IBMTR): ≥ 30% blasts in the peripheral blood and/or the bone marrow or the presence of extramedullary disease.8 Although not typically used in clinical trials, the revised World Health Organization (WHO) criteria for BP-CML include ≥ 20% blasts in the peripheral blood or bone marrow, extramedullary blast proliferation, and large foci or clusters of blasts in the bone marrow biopsy sample (Table).9
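Taken together, these phase definitions reduce to a handful of threshold checks. The following Python sketch is purely illustrative (the function and parameter names are hypothetical); it applies the modified MDACC criteria for AP and the IBMTR criterion for BP as described above:

```python
def classify_phase(blasts_pct, blasts_plus_promyelocytes_pct, basophils_pct,
                   platelets_per_ul, clonal_evolution=False,
                   extramedullary_disease=False,
                   therapy_related_thrombocytopenia=False):
    """Classify CML phase from peripheral blood/bone marrow values.

    Illustrative helper only: BP per IBMTR criteria, AP per modified
    MDACC criteria, otherwise CP.
    """
    # BP-CML (IBMTR): >= 30% blasts or extramedullary disease
    if blasts_pct >= 30 or extramedullary_disease:
        return "blast phase"
    # AP-CML (modified MDACC): any single criterion suffices
    if (15 <= blasts_pct <= 29
            or blasts_plus_promyelocytes_pct >= 30
            or basophils_pct >= 20
            or (platelets_per_ul <= 100_000
                and not therapy_related_thrombocytopenia)
            or clonal_evolution):
        return "accelerated phase"
    return "chronic phase"
```

Note that the BP check precedes the AP check, so the "≥ 30% blasts plus promyelocytes" criterion only applies when blasts alone remain below 30%, and therapy-related thrombocytopenia is excluded from the platelet criterion, as the MDACC definition specifies.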
The defining feature of CML is the presence of the Ph chromosome abnormality. In a small subset of patients, additional chromosome abnormalities (ACA) in the Ph-positive cells may be identified at diagnosis. Some reports indicate that the presence of “major route” ACA (trisomy 8, isochromosome 17q, a second Ph chromosome, or trisomy 19) at diagnosis may negatively impact prognosis, but other reports contradict these findings.10,11
PCR assay. The typical BCR breakpoint in CML is the major breakpoint cluster region (M-BCR), which results in a 210-kDa protein (p210). Alternate breakpoints that are less frequently identified are the minor BCR (mBCR or p190), which is more commonly found in Ph-positive acute lymphoblastic leukemia (ALL), and the micro BCR (µBCR or p230), which is much less common and is often characterized by chronic neutrophilia.12 Identifying which BCR-ABL1 transcript is present in each patient using qualitative PCR is crucial in order to ensure proper monitoring during treatment.
The most sensitive method for detecting BCR-ABL1 mRNA transcripts is the quantitative real-time PCR (RQ-PCR) assay, which is typically done on peripheral blood. RQ-PCR is capable of detecting a single CML cell in the presence of ≥ 100,000 normal cells. This test should be done during the initial diagnostic workup in order to confirm the presence of BCR-ABL1 transcripts, and it is used as a standard method for monitoring response to TKI therapy.13 The International Scale (IS) is a standardized approach to reporting RQ-PCR results that was developed to allow comparison of results across various laboratories and has become the gold standard for reporting BCR-ABL1 transcript values.14
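Reporting on the IS rests on multiplying each laboratory's raw RQ-PCR ratio by a lab-specific conversion factor, so the conversion itself is simple arithmetic. A minimal sketch, assuming a hypothetical conversion factor (in practice, each laboratory derives its factor by exchanging samples with a reference laboratory):

```python
def bcr_abl_is(bcr_abl_copies, control_gene_copies, conversion_factor):
    """Convert a raw RQ-PCR ratio to the International Scale (IS).

    conversion_factor is lab-specific; the value used below is
    illustrative only.
    """
    raw_ratio_pct = (bcr_abl_copies / control_gene_copies) * 100
    return raw_ratio_pct * conversion_factor

# e.g., 40 BCR-ABL1 copies per 10,000 control-gene copies with a
# hypothetical lab conversion factor of 0.8 gives 0.32% IS
result = bcr_abl_is(40, 10_000, 0.8)
```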
Determining Risk Scores
Calculating a patient’s Sokal score or EURO risk score at diagnosis remains an important component of the diagnostic workup in CP-CML, as this information has prognostic and therapeutic implications (an online calculator is available through European LeukemiaNet [ELN]). The risk for disease progression to the accelerated or blast phases is higher in patients with intermediate or high risk scores compared to those with a low risk score at diagnosis. The risk of progression in intermediate- or high-risk patients is lower when a second-generation TKI (dasatinib, nilotinib, or bosutinib) is used as frontline therapy compared to imatinib, and therefore, the National Comprehensive Cancer Network (NCCN) CML Panel recommends starting with a second-generation TKI in these patients.15-19
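For readers who want to see the arithmetic behind the ELN calculator, the commonly reproduced form of the Sokal formula can be sketched as follows. The constants follow the 1984 publication as usually cited, and the function names are hypothetical; the ELN online calculator should be used for any clinical decision:

```python
import math

def sokal_score(age_years, spleen_cm, platelets_e9_per_l, blasts_pct):
    """Compute the Sokal relative-risk score (commonly cited form).

    spleen_cm: palpable spleen size in cm below the costal margin;
    platelets_e9_per_l: platelet count in 10^9/L;
    blasts_pct: peripheral blood blasts (%).
    Illustrative only; verify against the ELN online calculator.
    """
    exponent = (0.0116 * (age_years - 43.4)
                + 0.0345 * (spleen_cm - 7.51)
                + 0.188 * ((platelets_e9_per_l / 700) ** 2 - 0.563)
                + 0.0887 * (blasts_pct - 2.1))
    return math.exp(exponent)

def sokal_risk_group(score):
    """Map a Sokal score to the conventional risk groups."""
    if score < 0.8:
        return "low"
    if score <= 1.2:
        return "intermediate"
    return "high"
```

By construction, a patient whose values all sit at the formula's reference points scores exactly 1.0 (intermediate risk); the conventional cutoffs are low < 0.8 and high > 1.2.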
Monitoring Response to Therapy
After confirming a diagnosis of CML and selecting the most appropriate TKI for first-line therapy, the successful management of CML patients relies on close monitoring and follow-up to ensure they are meeting the desired treatment milestones. Responses in CML can be assessed based on hematologic parameters, cytogenetic results, and molecular responses. A complete hematologic response (CHR) implies complete normalization of peripheral blood counts (with the exception of TKI-induced cytopenias) and resolution of any palpable splenomegaly. The majority of patients will achieve a CHR within 4 to 6 weeks after initiating CML-directed therapy.20
Cytogenetic Response
Cytogenetic responses are defined by the decrease in the number of Ph chromosome–positive metaphases when assessed on bone marrow cytogenetics. A partial cytogenetic response (PCyR) is defined as having 1% to 35% Ph-positive metaphases, a major cytogenetic response (MCyR) as having 0% to 35% Ph-positive metaphases, and a complete cytogenetic response (CCyR) implies that no Ph-positive metaphases are identified on bone marrow cytogenetics. An ideal response is the achievement of PCyR after 3 months on a TKI and a CCyR after 12 months on a TKI.21
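These definitions amount to a simple threshold check on the percentage of Ph-positive metaphases. An illustrative helper (not a clinical tool) following the cutoffs above:

```python
def cytogenetic_response(ph_positive_metaphases_pct):
    """Classify cytogenetic response from the percentage of
    Ph-positive metaphases on bone marrow cytogenetics.

    Illustrative only. CCyR and PCyR together constitute MCyR
    (0% to 35% Ph-positive metaphases).
    """
    if ph_positive_metaphases_pct == 0:
        return "CCyR"   # complete: no Ph-positive metaphases
    if ph_positive_metaphases_pct <= 35:
        return "PCyR"   # partial: 1% to 35%
    return "no MCyR"
```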
Molecular Response
Once a patient has achieved a CCyR, monitoring their response to therapy can only be done using RQ-PCR to measure BCR-ABL1 transcripts in the peripheral blood. The NCCN and the ELN recommend monitoring RQ-PCR from the peripheral blood every 3 months in order to assess response to TKIs.19,22 As noted, the IS has become the gold standard reporting system for all BCR-ABL1 transcript levels in the majority of laboratories worldwide.14,23 Molecular responses are based on a log reduction in BCR-ABL1 transcripts from a standardized baseline. Many molecular responses can be correlated with cytogenetic responses such that, if reliable RQ-PCR testing is available, monitoring can be done using only peripheral blood RQ-PCR rather than repeat bone marrow biopsies. For example, an early molecular response (EMR) is defined as a RQ-PCR value of ≤ 10% IS, which is approximately equivalent to a PCyR.24 A value of 1% IS is approximately equivalent to a CCyR. A major molecular response (MMR) is a ≥ 3-log reduction in BCR-ABL1 transcripts from baseline and is a value of ≤ 0.1% IS. Deeper levels of molecular response are best described by the log reduction in BCR-ABL1 transcripts, with a 4-log reduction denoted as MR4.0, a 4.5-log reduction as MR4.5, and so forth. Complete molecular response (CMR) is defined by the level of sensitivity of the RQ-PCR assay being used.14
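The correspondence between %IS values and log reductions from the standardized 100% baseline is straightforward to compute. A minimal sketch using the thresholds given above (EMR ≤ 10% IS; MMR ≤ 0.1% IS, i.e., a 3-log reduction; MR4.0 and MR4.5 at 4- and 4.5-log reductions; function names are hypothetical):

```python
import math

def log_reduction(is_pct):
    """Log10 reduction of BCR-ABL1 transcripts from the
    standardized 100% IS baseline."""
    return math.log10(100 / is_pct)

def molecular_response(is_pct):
    """Label the molecular response for a BCR-ABL1 %IS value
    (illustrative helper using the thresholds in the text)."""
    if is_pct <= 10 ** (2 - 4.5):   # 4.5-log reduction, ~0.0032% IS
        return "MR4.5"
    if is_pct <= 0.01:              # 4-log reduction
        return "MR4.0"
    if is_pct <= 0.1:               # 3-log reduction
        return "MMR"
    if is_pct <= 10:
        return "EMR"
    return "no EMR"
```

For example, a value of 0.1% IS corresponds to exactly a 3-log reduction, the boundary for MMR.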
The definition of relapsed disease in CML is dependent on the type of response the patient had previously achieved. Relapse could be the loss of a hematologic or cytogenetic response, but fluctuations in BCR-ABL1 transcripts on routine RQ-PCR do not necessarily indicate relapsed CML. A 1-log increase in the level of BCR-ABL1 transcripts with a concurrent loss of MMR should prompt a bone marrow biopsy in order to assess for the loss of CCyR, and thus a cytogenetic relapse; however, this loss of MMR does not define relapse in and of itself. In the setting of relapsed disease, testing should be done to look for possible ABL kinase domain mutations, and alternate therapy should be selected.19
Multiple reports have identified the prognostic relevance of achieving an EMR at 3 and 6 months after starting TKI therapy. Marin and colleagues reported that among 282 imatinib-treated patients, there was a significant improvement in 8-year OS, progression-free survival (PFS), and cumulative incidence of CCyR and CMR in patients who had BCR-ABL1 transcripts < 9.84% IS after 3 months on treatment.24 These data highlight the importance of early molecular monitoring in ensuring the best outcomes for patients with CP-CML.
The NCCN CML guidelines and ELN recommendations both agree that an ideal response after 3 months on a TKI is BCR-ABL1 transcripts < 10% IS, but treatment is not considered to be failing at this point if the patient marginally misses this milestone. After 6 months on treatment, an ideal response is considered BCR-ABL1 transcripts < 1%–10% IS. Ideally, patients will have BCR-ABL1 transcripts < 0.1%–1% IS by the time they complete 12 months of TKI therapy, suggesting that these patients have at least achieved a CCyR.19,22 Even after patients achieve these early milestones, frequent monitoring by RQ-PCR is required to ensure that they are maintaining their response to treatment. This will help to ensure patient compliance with treatment and will also help to identify a select subset of patients who could potentially be considered for an attempt at TKI cessation (not discussed in detail here) after a minimum of 3 years on therapy.19,25
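One simple reading of the milestones described above, taking the most stringent end of each range (10% IS at 3 months, 1% IS at 6 months, 0.1% IS at 12 months), can be expressed as a lookup. This is illustrative only and does not capture the guidelines' distinction between a marginal miss and treatment failure:

```python
# Ideal BCR-ABL1 %IS upper bounds at each checkpoint (months on TKI),
# one simplified reading of the NCCN/ELN milestones in the text.
IDEAL_MILESTONES = {3: 10.0, 6: 1.0, 12: 0.1}

def meets_ideal_milestone(months_on_tki, is_pct):
    """Return True if the %IS value meets the ideal milestone
    for the given checkpoint (illustrative helper)."""
    threshold = IDEAL_MILESTONES.get(months_on_tki)
    if threshold is None:
        raise ValueError("milestones defined at 3, 6, and 12 months only")
    return is_pct < threshold
```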
Selecting First-line TKI Therapy
Selection of the most appropriate first-line TKI for newly diagnosed CP-CML patients requires incorporation of many patient-specific factors. These factors include baseline karyotype and confirmation of CP-CML through bone marrow biopsy, Sokal or EURO risk score, and a thorough patient history, including a clear understanding of the patient’s comorbidities. The adverse effect profile of all TKIs must be considered in conjunction with the patient’s ongoing medical issues in order to decrease the likelihood of worsening their current symptoms or causing a severe complication from TKI therapy.
Imatinib
The management of CML was revolutionized by the development and ultimate regulatory approval of imatinib mesylate in 2001. Imatinib was the first small-molecule cancer therapy developed and approved. It acts by binding to the adenosine triphosphate (ATP) binding site in the catalytic domain of BCR-ABL, thus inhibiting the oncoprotein’s tyrosine kinase activity.26
The International Randomized Study of Interferon versus STI571 (IRIS) trial was a randomized phase 3 study that compared imatinib 400 mg daily to interferon alfa (IFNa) plus cytarabine. More than 1000 CP-CML patients were randomly assigned 1:1 to either imatinib or IFNa plus cytarabine and were assessed for event-free survival, hematologic and cytogenetic responses, freedom from progression to AP or BP, and toxicity. Imatinib was superior to the prior standard of care for all these outcomes.21 The long-term follow-up of the IRIS trial reported an 83% estimated 10-year OS and 79% estimated event-free survival for patients on the imatinib arm of this study.15 The cumulative rate of CCyR was 82.8%. Of the 204 imatinib-treated patients who could undergo a molecular response evaluation at 10 years, 93.1% had a MMR and 63.2% had a MR4.5, suggesting durable, deep molecular responses for many patients. The estimated 10-year rate of freedom from progression to AP or BP was 92.1%.
Higher doses of imatinib (600-800 mg daily) have been studied in an attempt to overcome resistance and improve cytogenetic and molecular response rates. The Tyrosine Kinase Inhibitor Optimization and Selectivity (TOPS) trial was a randomized phase 3 study that compared imatinib 800 mg daily to imatinib 400 mg daily. Although the 6-month assessments found increased rates of CCyR and a MMR in the higher-dose imatinib arm, these differences were no longer present at the 12-month assessment. Furthermore, the higher dose of imatinib led to a significantly higher incidence of grade 3/4 hematologic adverse events, and approximately 50% of patients on imatinib 800 mg daily required a dose reduction to less than 600 mg daily because of toxicity.27
The Therapeutic Intensification in De Novo Leukaemia (TIDEL)-II study started patients on imatinib 600 mg daily and used plasma trough levels of imatinib on day 22 to determine whether the dose should be escalated to 800 mg daily. In patients who did not meet molecular milestones at 3, 6, or 12 months, those in cohort 1 were dose escalated to imatinib 800 mg daily and subsequently switched to nilotinib 400 mg twice daily if they failed the same target 3 months later, while those in cohort 2 were switched directly to nilotinib. At 2 years, 73% of patients achieved MMR and 34% achieved MR4.5, suggesting that initial treatment with higher-dose imatinib, followed by a switch to nilotinib in those failing to achieve desired milestones, could be an effective strategy for managing newly diagnosed CP-CML.28
Toxicity. The standard starting dose of imatinib in CP-CML patients is 400 mg daily. The safety profile of imatinib has been very well established. In the IRIS trial, the most common adverse events (all grades in decreasing order of frequency) were peripheral and periorbital edema (60%), nausea (50%), muscle cramps (49%), musculoskeletal pain (47%), diarrhea (45%), rash (40%), fatigue (39%), abdominal pain (37%), headache (37%), and joint pain (31%). Grade 3/4 liver enzyme elevation can occur in 5% of patients.29 In the event of severe liver toxicity or fluid retention, imatinib should be held until the event resolves. At that time, imatinib can be restarted if deemed appropriate, but this is dependent on the severity of the inciting event. Fluid retention can be managed by the use of supportive care, diuretics, imatinib dose reduction, dose interruption, or imatinib discontinuation if the fluid retention is severe. Muscle cramps can be managed by the use of calcium supplements or tonic water. Management of rash can include topical or systemic steroids, or in some cases imatinib dose reduction, interruption, or discontinuation.19
Grade 3/4 imatinib-induced hematologic toxicity is not uncommon, with 17% of patients experiencing neutropenia, 9% thrombocytopenia, and 4% anemia. These adverse events occurred most commonly during the first year of therapy, and the frequency decreased over time.15,29 Depending on the degree of cytopenias, imatinib dosing should be interrupted until recovery of the absolute neutrophil count or platelet count, and can often be resumed at 400 mg daily. However, if cytopenias recur, imatinib should be held and subsequently restarted at 300 mg daily.19
Dasatinib
Dasatinib is a second-generation TKI that has regulatory approval for treatment of adult patients with newly diagnosed CP-CML or CP-CML in patients with resistance or intolerance to prior TKIs. In addition to dasatinib’s ability to inhibit ABL kinases, it is also known to be a potent inhibitor of Src family kinases. Dasatinib has shown efficacy in patients who have developed imatinib-resistant ABL kinase domain mutations.
Dasatinib was initially approved as second-line therapy in patients with resistance or intolerance to imatinib. This indication was based on the results of the phase 3 CA180-034 trial, which ultimately identified dasatinib 100 mg daily as the optimal dose. In this trial, 74% of patients enrolled had resistance to imatinib and the remainder were intolerant. The 7-year follow-up of patients randomized to dasatinib 100 mg daily (n = 167) indicated that 46% achieved MMR while on study. Of the 124 imatinib-resistant patients on dasatinib 100 mg daily, the 7-year PFS was 39% and OS was 63%. In the 43 imatinib-intolerant patients, the 7-year PFS was 51% and OS was 70%.30
Dasatinib 100 mg daily was compared to imatinib 400 mg daily in newly diagnosed CP-CML patients in the randomized phase 3 DASISION (Dasatinib versus Imatinib Study in Treatment-Naive CML Patients) trial. More patients on the dasatinib arm achieved an EMR of BCR-ABL1 transcripts ≤ 10% IS after 3 months on treatment compared to imatinib (84% versus 64%). Furthermore, the 5-year follow-up reports that the cumulative incidence of MMR and MR4.5 in dasatinib-treated patients was 76% and 42%, and was 64% and 33% with imatinib (P = 0.0022 and P = 0.0251, respectively). Fewer patients treated with dasatinib progressed to AP or BP (4.6%) compared to imatinib (7.3%), but the estimated 5-year OS was similar between the 2 arms (91% for dasatinib versus 90% for imatinib).16 Regulatory approval for dasatinib as first-line therapy in newly diagnosed CML patients was based on results of the DASISION trial.
Toxicity. Most dasatinib-related toxicities are reported as grade 1 or grade 2, but grade 3/4 hematologic adverse events are fairly common. In the DASISION trial, grade 3/4 neutropenia, anemia, and thrombocytopenia occurred in 29%, 13%, and 22% of dasatinib-treated patients, respectively. Cytopenias can generally be managed with temporary dose interruptions or dose reductions.
During the 5-year follow-up of the DASISION trial, pleural effusions were reported in 28% of patients, most of which were grade 1/2. This occurred at a rate of approximately ≤ 8% per year, suggesting a stable incidence over time, and the effusions appear to be dose-dependent.16 Depending on the severity, pleural effusion may be treated with diuretics, dose interruption, and, in some instances, steroids or a thoracentesis. Typically, dasatinib can be restarted at 1 dose level lower than the previous dose once the effusion has resolved.19 Other, less common side effects of dasatinib include pulmonary hypertension (5% of patients), as well as abdominal pain, fluid retention, headaches, fatigue, musculoskeletal pain, rash, nausea, and diarrhea. Pulmonary hypertension is typically reversible after cessation of dasatinib, and thus dasatinib should be permanently discontinued once the diagnosis is confirmed. Fluid retention is often treated with diuretics and supportive care. Nausea and diarrhea are generally manageable and occur less frequently when dasatinib is taken with food and a large glass of water. Antiemetics and antidiarrheals can be used as needed. Troublesome rash can be best managed with topical or systemic steroids as well as possible dose reduction or dose interruption.16,19 In the DASISION trial, adverse events led to therapy discontinuation more often in the dasatinib group than in the imatinib group (16% versus 7%).16 Bleeding, particularly in the setting of thrombocytopenia, has been reported in patients being treated with dasatinib as a result of the drug-induced reversible inhibition of platelet aggregation.31
Nilotinib
The structure of nilotinib is similar to that of imatinib; however, it has a markedly increased affinity for the ATP‐binding site on the BCR-ABL1 protein. It was initially given regulatory approval in the setting of imatinib failure. Nilotinib was studied at a dose of 400 mg twice daily in 321 patients who were imatinib-resistant or -intolerant. It proved to be highly effective at inducing cytogenetic remissions in the second-line setting, with 59% of patients achieving a MCyR and 45% achieving a CCyR. With a median follow-up time of 4 years, the OS was 78%.32
Nilotinib gained regulatory approval for use as a first-line TKI after completion of the randomized phase 3 ENESTnd (Evaluating Nilotinib Efficacy and Safety in Clinical Trials-Newly Diagnosed Patients) trial. ENESTnd was a 3-arm study comparing nilotinib 300 mg twice daily versus nilotinib 400 mg twice daily versus imatinib 400 mg daily in newly diagnosed, previously untreated patients diagnosed with CP-CML. The primary endpoint of this clinical trial was rate of MMR at 12 months.33 Nilotinib surpassed imatinib in this regard, with 44% of patients on nilotinib 300 mg twice daily achieving MMR at 12 months versus 43% of nilotinib 400 mg twice daily patients versus 22% of the imatinib-treated patients (P < 0.001 for both comparisons). Furthermore, the rate of CCyR by 12 months was significantly higher for both nilotinib arms compared with imatinib (80% for nilotinib 300 mg, 78% for nilotinib 400 mg, and 65% for imatinib) (P < 0.001).12 Based on this data, nilotinib 300 mg twice daily was chosen as the standard dose of nilotinib in the first-line setting. After 5 years of follow-up on the ENESTnd study, there were fewer progressions to AP/BP CML in nilotinib-treated patients compared with imatinib. MMR was achieved in 77% of nilotinib 300 mg patients compared with 60.4% of patients on the imatinib arm. MR4.5 was also more common in patients treated with nilotinib 300 mg twice daily, with a rate of 53.5% at 5 years versus 31.4% in the imatinib arm.17 In spite of the deeper cytogenetic and molecular responses achieved with nilotinib, this did not translate into a significant improvement in OS. The 5-year OS rate was 93.7% in nilotinib 300 mg patients versus 91.7% in imatinib-treated patients, and this difference lacked statistical significance.17
Toxicity. Although some similarities exist between the toxicity profiles of nilotinib and imatinib, each drug has some distinct adverse events. On the ENESTnd trial, the rate of any grade 3/4 non-hematologic adverse event was fairly low; however, lower-grade toxicities were not uncommon. Patients treated with nilotinib 300 mg twice daily most commonly experienced rash (31%), headache (14%), pruritus (15%), and fatigue (11%). The most frequently reported laboratory abnormalities included increased total bilirubin (53%), hypophosphatemia (32%), hyperglycemia (36%), elevated lipase (24%), increased alanine aminotransferase (ALT; 66%), and increased aspartate aminotransferase (AST; 40%). Any grade of neutropenia, thrombocytopenia, or anemia occurred at rates of 43%, 48%, and 38%, respectively.33 Although nilotinib has a Black Box Warning from the US Food and Drug Administration for QT interval prolongation, no patients on the ENESTnd trial experienced a corrected QT interval (QTc) greater than 500 msec.12
More recent concerns have emerged regarding the potential for cardiovascular toxicity after long-term use of nilotinib. The 5-year update of ENESTnd reports cardiovascular events, including ischemic heart disease, ischemic cerebrovascular events, or peripheral arterial disease occurring in 7.5% of patients treated with nilotinib 300 mg twice daily, as compared with a rate of 2.1% in imatinib-treated patients. The frequency of these cardiovascular events increased linearly over time in both arms. Elevations in total cholesterol from baseline occurred in 27.6% of nilotinib patients compared with 3.9% of imatinib patients. Furthermore, clinically meaningful increases in low-density lipoprotein cholesterol and glycated hemoglobin occurred more frequently with nilotinib therapy.33
Nilotinib should be taken on an empty stomach; therefore, patients should be made aware of the need to fast for 2 hours prior to each dose and 1 hour after each dose. Given the potential risk of QT interval prolongation, a baseline electrocardiogram (ECG) is recommended prior to initiating treatment to ensure the QT interval is within a normal range. A repeat ECG should be done approximately 7 days after nilotinib initiation to ensure no prolongation of the QT interval after starting. Close monitoring of potassium and magnesium levels is important to decrease the risk of cardiac arrhythmias, and concomitant use of drugs considered strong CYP3A4 inhibitors should be avoided.19
If the patient experiences any grade 3 or higher laboratory abnormalities, nilotinib should be held until resolution of the toxicity, and then restarted at a lower dose. Similarly, if patients develop significant neutropenia or thrombocytopenia, nilotinib doses should be interrupted until resolution of the cytopenias. At that point, nilotinib can be reinitiated at either the same or a lower dose. Rash can be managed by the use of topical or systemic steroids as well as potential dose reduction, interruption, or discontinuation.
Given the concerns for potential cardiovascular events with long-term use of nilotinib, caution is advised when prescribing it to any patient with a history of cardiovascular disease or peripheral arterial occlusive disease. At the first sign of new occlusive disease, nilotinib should be discontinued.19
Bosutinib
Bosutinib is a second-generation BCR-ABL TKI with activity against the Src family of kinases; it was initially approved to treat patients with CP-, AP-, or BP-CML after resistance or intolerance to imatinib. Long-term data has been reported from the phase 1/2 trial of bosutinib therapy in patients with CP-CML who developed resistance or intolerance to imatinib plus dasatinib and/or nilotinib. A total of 119 patients were included in the 4-year follow-up; 38 were resistant/intolerant to imatinib and resistant to dasatinib, 50 were resistant/intolerant to imatinib and intolerant to dasatinib, 26 were resistant/intolerant to imatinib and resistant to nilotinib, and 5 were resistant/intolerant to imatinib and intolerant to nilotinib or resistant/intolerant to dasatinib and nilotinib. Bosutinib 400 mg daily was studied in this setting. Of the 38 patients with imatinib resistance/intolerance and dasatinib resistance, 39% achieved MCyR, 22% achieved CCyR, and the OS was 67%. Of the 50 patients with imatinib resistance/intolerance and dasatinib intolerance, 42% achieved MCyR, 40% achieved CCyR, and the OS was 80%. Finally, in the 26 patients with imatinib resistance/intolerance and nilotinib resistance, 38% achieved MCyR, 31% achieved CCyR, and the OS was 87%.34
Five-year follow-up from the phase 1/2 clinical trial that studied bosutinib 500 mg daily in CP-CML patients after imatinib failure reported data on 284 patients. By 5 years on study, 60% of patients had achieved MCyR and 50% achieved CCyR with a 71% and 69% probability, respectively, of maintaining these responses at 5 years. The 5-year OS was 84%.35 These data led to the regulatory approval of bosutinib 500 mg daily as second-line or later therapy.
Bosutinib was initially studied in the first-line setting in the randomized phase 3 BELA (Bosutinib Efficacy and Safety in Newly Diagnosed Chronic Myeloid Leukemia) trial. This trial compared bosutinib 500 mg daily to imatinib 400 mg daily in newly diagnosed, previously untreated CP-CML patients. This trial failed to meet its primary endpoint of increased rate of CCyR at 12 months, with 70% of bosutinib patients achieving this response, compared to 68% of imatinib-treated patients (P = 0.601). In spite of this, the rate of MMR at 12 months was significantly higher in the bosutinib arm (41%) compared to the imatinib arm (27%; P = 0.001).36
A second phase 3 trial (BFORE) was designed to study bosutinib 400 mg daily versus imatinib in newly diagnosed, previously untreated CP-CML patients. This study enrolled 536 patients who were randomly assigned 1:1 to bosutinib versus imatinib. The primary endpoint of this trial was rate of MMR at 12 months. A significantly higher number of bosutinib-treated patients achieved this response (47.2%) compared with imatinib-treated patients (36.9%, P = 0.02). Furthermore, by 12 months 77.2% of patients on the bosutinib arm had achieved CCyR compared with 66.4% on the imatinib arm, and this difference did meet statistical significance (P = 0.0075). A lower rate of progression to AP- or BP-CML was noted in bosutinib-treated patients as well (1.6% versus 2.5%). Based on this data, bosutinib gained regulatory approval for first-line therapy in CP-CML at a dose of 400 mg daily.18
Toxicity. On the BFORE trial, the most common treatment-emergent adverse events of any grade reported in the bosutinib-treated patients were diarrhea (70.1%), nausea (35.1%), increased ALT (30.6%), and increased AST (22.8%). Musculoskeletal pain or spasms occurred in 29.5% of patients, rash in 19.8%, fatigue in 19.4%, and headache in 18.7%. Hematologic toxicity was also reported, but most was grade 1/2. Thrombocytopenia was reported in 35.1%, anemia in 18.7%, and neutropenia in 11.2%.18
Cardiovascular events occurred in 5.2% of patients on the bosutinib arm of the BFORE trial, which was similar to the rate observed in imatinib patients. The most common cardiovascular event was QT interval prolongation, which occurred in 1.5% of patients. Pleural effusions were reported in 1.9% of patients treated with bosutinib, and none were grade 3 or higher.18
If liver enzyme elevation occurs at a value greater than 5 times the institutional upper limit of normal, bosutinib should be held until the level recovers to ≤ 2.5 times the upper limit of normal, at which point bosutinib can be restarted at a lower dose. If recovery takes longer than 4 weeks, bosutinib should be permanently discontinued. Liver enzymes elevated greater than 3 times the institutional upper limit of normal and a concurrent elevation in total bilirubin to 2 times the upper limit of normal are consistent with Hy’s law, and bosutinib should be discontinued. Although diarrhea is the most common toxicity associated with bosutinib, it is commonly low grade and transient. Diarrhea occurs most frequently in the first few days after initiating bosutinib. It can often be managed with over-the-counter antidiarrheal medications, but if the diarrhea is grade 3 or higher, bosutinib should be held until recovery to grade 1 or lower. Gastrointestinal side effects may be improved by taking bosutinib with a meal and a large glass of water. Fluid retention can be managed with diuretics and supportive care. Finally, if rash occurs, this can be addressed with topical or systemic steroids as well as bosutinib dose reduction, interruption, or discontinuation.19
Similar to other TKIs, if bosutinib-induced cytopenias occur, treatment should be held and restarted at the same or a lower dose upon blood count recovery.19
Ponatinib
The most common cause of TKI resistance in CP-CML is the development of ABL kinase domain mutations. The majority of imatinib-resistant mutations can be overcome by the use of second-generation TKIs, including dasatinib, nilotinib, or bosutinib. However, ponatinib is the only BCR-ABL TKI able to overcome a T315I mutation. The phase 2 PACE (Ponatinib Ph-positive ALL and CML Evaluation) trial enrolled patients with CP-, AP-, or BP-CML as well as patients with Ph-positive acute lymphoblastic leukemia who were resistant or intolerant to nilotinib or dasatinib, or who had evidence of a T315I mutation. The starting dose of ponatinib on this trial was 45 mg daily.37 The PACE trial enrolled 267 patients with CP-CML: 203 with resistance or intolerance to nilotinib or dasatinib, and 64 with a T315I mutation. The primary endpoint in the CP cohort was rate of MCyR at any time within 12 months of starting ponatinib. The overall rate of MCyR by 12 months in the CP-CML patients was 56%. In those with a T315I mutation, 70% achieved MCyR, which compared favorably with those with resistance or intolerance to nilotinib or dasatinib, 51% of whom achieved MCyR. CCyR was achieved in 46% of CP-CML patients (40% in the resistant/intolerant cohort and 66% in the T315I cohort). In general, patients with T315I mutations received fewer prior therapies than those in the resistant/intolerant cohort, which likely contributed to the higher response rates in the T315I patients. MR4.5 was achieved in 15% of CP-CML patients by 12 months on the PACE trial.37 The 5-year update to this study reported that 60%, 40%, and 24% of CP-CML patients achieved MCyR, MMR, and MR4.5, respectively. In the patients who achieved MCyR, the probability of maintaining this response for 5 years was 82% and the estimated 5-year OS was 73%.19
Toxicity. In 2013, after the regulatory approval of ponatinib, reports became available that the drug can cause an increase in arterial occlusive events, including fatal myocardial infarctions and cerebrovascular accidents. For this reason, dose reductions were implemented in patients who were deriving clinical benefit from ponatinib. In spite of these dose reductions, ≥ 90% of responders maintained their response for up to 40 months.38 Although the likelihood of developing an arterial occlusive event appears higher in the first year after starting ponatinib than in later years, the cumulative incidence of events continues to increase. The 5-year follow-up to the PACE trial reports 31% of patients experiencing any grade of arterial occlusive event while on ponatinib. Aside from these events, the most common treatment-emergent adverse events in ponatinib-treated patients on the PACE trial included rash (47%), abdominal pain (46%), headache (43%), dry skin (42%), constipation (41%), and hypertension (37%). Hematologic toxicity was also common, with 46% of patients experiencing any grade of thrombocytopenia, 20% experiencing neutropenia, and 20% anemia.38
Patients receiving ponatinib therapy should be monitored closely for any evidence of arterial or venous thrombosis. If an occlusive event occurs, ponatinib should be discontinued. Similarly, in the setting of any new or worsening heart failure symptoms, ponatinib should be promptly discontinued. Management of any underlying cardiovascular risk factors, including hypertension, hyperlipidemia, diabetes, or smoking history, is recommended, and these patients should be referred to a cardiologist for a full evaluation. In the absence of any contraindications to aspirin, low-dose aspirin should be considered as a means of decreasing cardiovascular risks associated with ponatinib. In patients with known risk factors, a ponatinib starting dose of 30 mg daily rather than the standard 45 mg daily may be a safer option, resulting in fewer arterial occlusive events, although the efficacy of this dose is still being studied in comparison to 45 mg daily.19
If ponatinib-induced transaminitis greater than 3 times the upper limit of normal occurs, ponatinib should be held until resolution to less than 3 times the upper limit of normal, at which point it should be resumed at a lower dose. Similarly, in the setting of elevated serum lipase or symptomatic pancreatitis, ponatinib should be held and restarted at a lower dose after resolution of symptoms.19
In the event of neutropenia or thrombocytopenia, ponatinib should be held until blood count recovery and then restarted at the same dose. If cytopenias occur for a second time, the dose of ponatinib should be lowered at the time of treatment reinitiation. If rash occurs, it can be addressed with topical or systemic steroids as well as dose reduction, interruption, or discontinuation.19
Conclusion
With the development of imatinib and the subsequent TKIs, dasatinib, nilotinib, bosutinib, and ponatinib, CP-CML has become a chronic disease with a life expectancy that is similar to that of the general population. Given the successful treatments available for these patients, it is crucial to identify patients with this diagnosis, ensure they receive a complete, appropriate diagnostic workup including a bone marrow biopsy and aspiration with cytogenetic testing, and select the best therapy for each individual patient. Once on treatment, the importance of frequent monitoring cannot be overstated. This is the only way to be certain patients are achieving the desired treatment milestones that correlate with the favorable long-term outcomes that have been observed with TKI-based treatment of CP-CML.
Corresponding author: Kendra Sweet, MD, MS, Department of Malignant Hematology, Moffitt Cancer Center, Tampa, FL.
Financial disclosures: Dr. Sweet has served on the Advisory Board and Speakers Bureau of Novartis, Bristol-Myers Squibb, Ariad Pharmaceuticals, and Pfizer, and has served as a consultant to Pfizer.
1. Faderl S, Talpaz M, Estrov Z, et al. The biology of chronic myeloid leukemia. N Engl J Med. 1999;341:164-172.
2. Surveillance, Epidemiology, and End Results Program. Cancer Stat Facts: Leukemia - Chronic Myeloid Leukemia (CML). 2018.
3. Huang X, Cortes J, Kantarjian H. Estimations of the increasing prevalence and plateau prevalence of chronic myeloid leukemia in the era of tyrosine kinase inhibitor therapy. Cancer. 2012;118:3123-3127.
4. Savage DG, Szydlo RM, Chase A, et al. Bone marrow transplantation for chronic myeloid leukaemia: the effects of differing criteria for defining chronic phase on probabilities of survival and relapse. Br J Haematol. 1997;99:30-35.
5. Knox WF, Bhavnani M, Davson J, Geary CG. Histological classification of chronic granulocytic leukaemia. Clin Lab Haematol. 1984;6:171-175.
6. Kvasnicka HM, Thiele J, Schmitt-Graeff A, et al. Impact of bone marrow morphology on multivariate risk classification in chronic myelogenous leukemia. Acta Haematol. 2003;109:53-56.
7. Cortes JE, Talpaz M, O’Brien S, et al. Staging of chronic myeloid leukemia in the imatinib era: an evaluation of the World Health Organization proposal. Cancer. 2006;106:1306-1315.
8. Druker BJ. Chronic myeloid leukemia. In: DeVita VT, Lawrence TS, Rosenberg SA, eds. DeVita, Hellman, and Rosenberg’s Cancer Principles & Practice of Oncology. 8th ed. Philadelphia, PA: Lippincott, Williams and Wilkins; 2007:2267-2304.
9. Arber DA, Orazi A, Hasserjian R, et al. The 2016 revision to the World Health Organization classification of myeloid neoplasms and acute leukemia. Blood. 2016;127:2391-2405.
10. Fabarius A, Leitner A, Hochhaus A, et al. Impact of additional cytogenetic aberrations at diagnosis on prognosis of CML: long-term observation of 1151 patients from the randomized CML Study IV. Blood. 2011;118:6760-6768.
11. Alhuraiji A, Kantarjian H, Boddu P, et al. Prognostic significance of additional chromosomal abnormalities at the time of diagnosis in patients with chronic myeloid leukemia treated with frontline tyrosine kinase inhibitors. Am J Hematol. 2018;93:84-90.
12. Melo JV. BCR-ABL gene variants. Baillieres Clin Haematol. 1997;10:203-222.
13. Kantarjian HM, Talpaz M, Cortes J, et al. Quantitative polymerase chain reaction monitoring of BCR-ABL during therapy with imatinib mesylate (STI571; gleevec) in chronic-phase chronic myelogenous leukemia. Clin Cancer Res. 2003;9:160-166.
14. Hughes T, Deininger M, Hochhaus A, et al. Monitoring CML patients responding to treatment with tyrosine kinase inhibitors: review and recommendations for harmonizing current methodology for detecting BCR-ABL transcripts and kinase domain mutations and for expressing results. Blood. 2006;108:28-37.
15. Hochhaus A, Larson RA, Guilhot F, et al. Long-term outcomes of imatinib treatment for chronic myeloid leukemia. N Engl J Med. 2017;376:917-927.
16. Cortes JE, Saglio G, Kantarjian HM, et al. Final 5-year study results of DASISION: the Dasatinib Versus Imatinib Study in Treatment-Naive Chronic Myeloid Leukemia Patients trial. J Clin Oncol. 2016;34:2333-2340.
17. Hochhaus A, Saglio G, Hughes TP, et al. Long-term benefits and risks of frontline nilotinib vs imatinib for chronic myeloid leukemia in chronic phase: 5-year update of the randomized ENESTnd trial. Leukemia. 2016;30:1044-1054.
18. Cortes JE, Gambacorti-Passerini C, Deininger MW, et al. Bosutinib versus imatinib for newly diagnosed chronic myeloid leukemia: results from the randomized BFORE trial. J Clin Oncol. 2018;36:231-237.
19. Radich JP, Deininger M, Abboud CN, et al. Chronic Myeloid Leukemia, Version 1.2019, NCCN Clinical Practice Guidelines in Oncology. J Natl Compr Canc Netw. 2018;16:1108-1135.
20. Faderl S, Talpaz M, Estrov Z, Kantarjian HM. Chronic myelogenous leukemia: biology and therapy. Ann Intern Med. 1999;131:207-219.
21. O’Brien SG, Guilhot F, Larson RA, et al. Imatinib compared with interferon and low-dose cytarabine for newly diagnosed chronic-phase chronic myeloid leukemia. N Engl J Med. 2003;348:994-1004.
22. Baccarani M, Deininger MW, Rosti G, et al. European LeukemiaNet recommendations for the management of chronic myeloid leukemia: 2013. Blood. 2013;122:872-884.
23. Larripa I, Ruiz MS, Gutierrez M, Bianchini M. [Guidelines for molecular monitoring of BCR-ABL1 in chronic myeloid leukemia patients by RT-qPCR]. Medicina (B Aires). 2017;77:61-72.
24. Marin D, Ibrahim AR, Lucas C, et al. Assessment of BCR-ABL1 transcript levels at 3 months is the only requirement for predicting outcome for patients with chronic myeloid leukemia treated with tyrosine kinase inhibitors. J Clin Oncol. 2012;30:232-238.
25. Hughes TP, Ross DM. Moving treatment-free remission into mainstream clinical practice in CML. Blood. 2016;128:17-23.
26. Druker BJ, Talpaz M, Resta DJ, et al. Efficacy and safety of a specific inhibitor of the BCR-ABL tyrosine kinase in chronic myeloid leukemia. N Engl J Med. 2001;344:1031-1037.
27. Baccarani M, Druker BJ, Branford S, et al. Long-term response to imatinib is not affected by the initial dose in patients with Philadelphia chromosome-positive chronic myeloid leukemia in chronic phase: final update from the Tyrosine Kinase Inhibitor Optimization and Selectivity (TOPS) study. Int J Hematol. 2014;99:616-624.
28. Yeung DT, Osborn MP, White DL, et al. TIDEL-II: first-line use of imatinib in CML with early switch to nilotinib for failure to achieve time-dependent molecular targets. Blood. 2015;125:915-923.
29. Druker BJ, Guilhot F, O’Brien SG, et al. Five-year follow-up of patients receiving imatinib for chronic myeloid leukemia. N Engl J Med. 2006;355:2408-2417.
30. Shah NP, Rousselot P, Schiffer C, et al. Dasatinib in imatinib-resistant or -intolerant chronic-phase, chronic myeloid leukemia patients: 7-year follow-up of study CA180-034. Am J Hematol. 2016;91:869-874.
31. Quintas-Cardama A, Han X, Kantarjian H, Cortes J. Tyrosine kinase inhibitor-induced platelet dysfunction in patients with chronic myeloid leukemia. Blood. 2009;114:261-263.
32. Giles FJ, le Coutre PD, Pinilla-Ibarz J, et al. Nilotinib in imatinib-resistant or imatinib-intolerant patients with chronic myeloid leukemia in chronic phase: 48-month follow-up results of a phase II study. Leukemia. 2013;27:107-112.
33. Saglio G, Kim DW, Issaragrisil S, et al. Nilotinib versus imatinib for newly diagnosed chronic myeloid leukemia. N Engl J Med. 2010;362:2251-2259.
34. Cortes JE, Khoury HJ, Kantarjian HM, et al. Long-term bosutinib for chronic phase chronic myeloid leukemia after failure of imatinib plus dasatinib and/or nilotinib. Am J Hematol. 2016;91:1206-1214.
35. Gambacorti-Passerini C, Cortes JE, Lipton JH, et al. Safety and efficacy of second-line bosutinib for chronic phase chronic myeloid leukemia over a five-year period: final results of a phase I/II study. Haematologica. 2018;103:1298-1307.
36. Cortes JE, Kim DW, Kantarjian HM, et al. Bosutinib versus imatinib in newly diagnosed chronic-phase chronic myeloid leukemia: results from the BELA trial. J Clin Oncol. 2012;30:3486-3492.
37. Cortes JE, Kim DW, Pinilla-Ibarz J, et al. A phase 2 trial of ponatinib in Philadelphia chromosome-positive leukemias. N Engl J Med. 2013;369:1783-1796.
38. Cortes JE, Kim DW, Pinilla-Ibarz J, et al. Ponatinib efficacy and safety in Philadelphia chromosome-positive leukemia: final 5-year results of the phase 2 PACE trial. Blood. 2018;132:393-404.
Mismatch Between Process and Outcome Measures for Hospital-Acquired Venous Thromboembolism in a Surgical Cohort
From Tufts Medical Center, Boston, MA.
Abstract
- Objective: Audits at our academic medical center revealed near 100% compliance with protocols for perioperative venous thromboembolism (VTE) prophylaxis, but recent National Surgical Quality Improvement Program data demonstrated a higher than expected incidence of VTE (observed/expected = 1.32). The objective of this study was to identify potential causes of this discrepancy.
- Design: Retrospective case-control study.
- Setting: Urban academic medical center with high case-mix indices (Medicare approximately 2.4, non-Medicare approximately 2.0).
- Participants: 102 surgical inpatients with VTE (September 2012 to October 2015) matched with controls for age, gender, and type of procedure.
- Measurements: Prevalence of common VTE risk factors, length of stay, number of procedures, index operation times, and postoperative bed rest > 12 hours were assessed. Utilization of and compliance with our VTE risk assessment tool was also investigated.
- Results: Cases underwent more procedures and had longer lengths of stay and index procedures than controls. In addition, cases were more likely to have had > 12 hours of postoperative bed rest and central venous access than controls. Cases had more infections and were more likely to have severe lung disease, thrombophilia, and a history of prior VTE than controls. No differences in body mass index, tobacco use, current or previous malignancy, or VTE risk assessment form use were observed. Overall, care complexity and risk factors were equally important in determining VTE incidence. Our analyses also revealed lack of strict adherence to our VTE risk stratification protocol and frequent use of suboptimal prophylactic regimens.
- Conclusion: Well-accepted risk factors and overall care complexity determine VTE risk. Preventing VTE in high-risk patients requires assiduous attention to detail in VTE risk assessment and in delivery of optimal prophylaxis. Patients at especially high risk may require customized prophylactic regimens.
Keywords: hospital-acquired venous thromboembolic disease; VTE prophylaxis, surgical patients.
Deep vein thrombosis (DVT) and pulmonary embolism (PE) are well-recognized causes of morbidity and mortality in surgical patients. Between 350,000 and 600,000 cases of venous thromboembolism (VTE) occur each year in the United States, and it is responsible for approximately 10% of preventable in-hospital fatalities.1-3 Given VTE’s impact on patients and the healthcare system and the fact that it is preventable, intense effort has been focused on developing more effective prophylactic measures to decrease its incidence.2-4 In 2008, the surgeon general issued a “call to action” for increased efforts to prevent VTE.5
The American College of Chest Physicians (ACCP) guidelines subcategorize patients based on type of surgery. In addition, the ACCP guidelines support the use of a Caprini-based scoring system to aid in risk stratification and improve clinical decision-making.
Our hospital, a 350-bed academic medical center in downtown Boston, MA, serving a diverse population with a very high case-mix index (2.4 Medicare and 2.0 non-Medicare), has strict protocols for VTE prophylaxis consistent with the ACCP guidelines and based on the Surgical Care Improvement Project (SCIP) measures published in 2006.10 The SCIP mandates allow for considerable surgeon discretion in the use of chemoprophylaxis for neurosurgical cases and general and orthopedic surgery cases deemed to be at high risk for bleeding. In addition, SCIP requires only that prophylaxis be initiated within 24 hours of surgical end time. Although recent audits revealed nearly 100% compliance with SCIP-mandated protocols, National Surgical Quality Improvement Program (NSQIP) data showed that the incidence of VTE events at our institution was higher than expected (observed/expected [O/E] = 1.32).
In order to determine the reasons for this mismatch between process and outcome performance, we investigated whether there were characteristics of our patient population that contributed to the higher than expected rates of VTE, and we scrutinized our VTE prophylaxis protocol to determine if there were aspects of our process that were also contributory.
Methods
Study Sample
This is a retrospective case-control study of surgical inpatients at our hospital during the period September 2012 to October 2015. Cases were identified as patients diagnosed with a VTE (DVT or PE). Controls were identified from a pool of surgical patients whose courses were not complicated by VTE during the same time frame as the cases and who were matched as closely as possible by procedure code, age, and gender.
Variables
Patient and hospital course variables that were analyzed included demographics, comorbidities, length of stay, number of procedures, index operation times, duration of postoperative bed rest, use of mechanical prophylaxis, and type of chemoprophylaxis and the time frame within which it was initiated. Data were collected via chart review using International Classification of Diseases-9 and -10 codes to identify surgical patients diagnosed with VTE within the study period. Demographic variables included age, sex, and ethnicity. Comorbidities included hypertension, diabetes, coronary artery disease, serious lung disease, previous or current malignancy, documented hypercoagulable state, and previous history of VTE. Body mass index (BMI) was also recorded. The aforementioned disease-specific variables were not matched between the case and control groups, as these data were obtained retrospectively during chart review.
Analysis
Associations between cases and matched controls were analyzed using the paired t-test for continuous variables and McNemar's test for categorical variables. P values < 0.05 were considered statistically significant. SAS Enterprise Guide 7.15 (SAS Institute Inc., Cary, NC) was used for all statistical analyses.
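Because cases and controls are matched pairs, McNemar's test uses only the discordant pairs (those in which exactly one member of the pair has the risk factor). The exact binomial form of the test can be computed directly; the sketch below is illustrative only, and the pair counts in the example are hypothetical rather than the study's actual data.

```python
from math import comb

def mcnemar_exact(b: int, c: int) -> float:
    """Exact (binomial) McNemar test p-value for matched binary data.

    b = discordant pairs where only the case has the risk factor;
    c = discordant pairs where only the control has it.
    Concordant pairs carry no information about the case-control difference,
    so they do not enter the calculation.
    """
    n = b + c
    if n == 0:
        return 1.0
    k = min(b, c)
    # Two-sided exact p-value under H0: discordances split 50/50.
    p = 2 * sum(comb(n, i) for i in range(k + 1)) / 2**n
    return min(p, 1.0)

# Hypothetical example: 12 discordant pairs for prior VTE,
# 10 where only the case had a prior VTE vs. 2 where only the control did.
print(round(mcnemar_exact(10, 2), 4))  # → 0.0386
```

With larger samples the chi-square approximation (with continuity correction) gives similar results, but the exact form is preferable when discordant pairs are few, as in a 102-pair study.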
The requirement for informed consent was waived by our Institutional Review Board, as the study was initially deemed to be a quality improvement project, and all data used for this report were de-identified.
Results
Our retrospective case-control analysis included a sample of 102 surgical patients whose courses were complicated by VTE between September 2012 and October 2015. The cases were distributed among 6 different surgical categories (Figure 1): trauma (20%), cancer (10%), cardiovascular (21%), noncancer neurosurgery (28%), elective orthopedics (11%), and miscellaneous general surgery (10%).
Comparisons between cases and controls in terms of patient demographics and risk factors are shown in Table 2. No statistically significant difference was observed in ethnicity or race between the 2 groups. Overall, cases had more hip/pelvis/leg fractures at presentation (P = 0.0008). The case group also had higher proportions of patients with postoperative bed rest greater than 12 hours (P = 0.009), central venous access (P < 0.0001), infection (P < 0.0001), and lower extremity edema documented during the hospitalization prior to development of DVT (P < 0.0001). Additionally, cases had significantly greater rates of previous VTE (P = 0.0004), inherited or acquired thrombophilia (P = 0.03), history of stroke (P = 0.0003), and severe lung disease, including pneumonia (P = 0.0008). No significant differences were noted between cases and matched controls in BMI (P = 0.43), current tobacco use (P = 0.71), current malignancy (P = 0.80), previous malignancy (P = 0.83), head trauma (P = 0.17), or acute cardiac disease (myocardial infarction or congestive heart failure; P = 0.12).
Variables felt to indicate overall complexity of hospital course for cases as compared to controls are outlined in Table 3. Cases were found to have significantly longer lengths of stay (median, 15.5 days versus 3 days, P < 0.0001). To account for the possibility that the development of VTE contributed to the increased length of stay in the cases, we also looked at the duration between admission date and the date of VTE diagnosis and determined that cases still had a longer length of stay when this was accounted for (median, 7 days versus 3 days, P < 0.0001). A much higher proportion of cases underwent more than 1 procedure compared to controls (P < 0.0001), and cases had significantly longer index operations as compared to controls (P = 0.002).
Seventeen cases received heparin on induction during their index procedure, compared to 23 controls (P = 0.24). Additionally, 63 cases began a prophylaxis regimen within 24 hours of surgery end time, compared to 68 controls (P = 0.24). The chemoprophylactic regimens utilized in cases and in controls are summarized in Figure 2. Of note, only 26 cases and 32 controls received standard prophylactic regimens with no missed doses (heparin 5000 units 3 times daily or enoxaparin 40 mg daily). Additionally, in over half of cases and a third of controls, nonstandard regimens were ordered. Examples of nonstandard regimens included nonstandard heparin or enoxaparin doses, low-dose warfarin, or aspirin alone. In most cases, nonstandard regimens were justified on the basis of high risk for bleeding.
Mechanical prophylaxis with pneumatic sequential compression devices (SCDs) was ordered in 93 (91%) cases and 87 (85%) controls; however, we were unable to accurately document uniform compliance in the use of these devices.
With regard to evaluation of our process measures, we found only 17% of cases and controls combined actually had a VTE risk assessment in their chart, and when it was present, it was often incomplete or was completed inaccurately.
Discussion
The goal of this study was to identify factors (patient characteristics and/or processes of care) that may be contributing to the higher than expected incidence of VTE events at our medical center, despite internal audits suggesting near perfect compliance with SCIP-mandated protocols. We found that in addition to usual risk factors for VTE, an overarching theme of our case cohort was their high complexity of illness. At baseline, these patients had significantly greater rates of stroke, thrombophilia, severe lung disease, infection, and history of VTE than controls. Moreover, the hospital courses of cases were significantly more complex than those of controls, as these patients had more procedures, longer lengths of stay and longer index operations, higher rates of postoperative bed rest exceeding 12 hours, and more prevalent central venous access than controls (Table 2). Several of these risk factors have been found to contribute to VTE development despite compliance with prophylaxis protocols.
Cassidy et al reviewed a cohort of nontrauma general surgery patients who developed VTE despite receiving appropriate prophylaxis and found that both multiple operations and emergency procedures contributed to the failure of VTE prophylaxis.11 Similarly, Wang et al identified several independent risk factors for VTE despite thromboprophylaxis, including central venous access and infection, as well as intensive care unit admission, hospitalization for cranial surgery, and admission from a long-term care facility.12 While our study did not capture some of these additional factors considered by Wang et al, the presence of risk factors not captured in traditional assessment tools suggests that additional consideration for complex patients is warranted.
In addition to these nonmodifiable patient characteristics, aspects of our VTE prophylaxis processes likely contributed to the higher than expected rate of VTE. While the electronic medical record at our institution does contain a VTE risk assessment tool based on the Caprini score, we found it often is not used at all or is used incorrectly/incompletely, which likely reflects the fact that physicians are neither prompted nor required to complete the assessment prior to prescribing VTE prophylaxis.
There is a significant body of evidence demonstrating that mandatory computerized VTE risk assessments can effectively reduce VTE rates and that improved outcomes occur shortly after implementation. Cassidy et al demonstrated the benefits of instituting a hospital-wide, mandatory, Caprini-based computerized VTE risk assessment that provides prophylaxis/early ambulation recommendations. Two years after implementing this system, they observed an 84% reduction in DVTs (P < 0.001) and a 55% reduction in PEs (P < 0.001).13 Nimeri et al had similarly impressive success, achieving a reduction in their NSQIP O/E for PE/DVT in general surgery from 6.00 in 2010 to 0.82 (for DVTs) and 0.78 (for PEs) 5 years after implementation of mandatory VTE risk assessment (though they noted that the most dramatic reduction occurred 1 year after implementation).14 Additionally, a recent systematic review and meta-analysis by Borab et al found computerized VTE risk assessments to be associated with a significant decrease in VTE events.15
The risk assessment tool used at our institution is qualitative in nature, and current literature suggests that employing a more quantitative tool may yield improved outcomes. Numerous studies have highlighted the importance of identifying patients at very high risk for VTE, as higher risk may necessitate more careful consideration of their prophylactic regimens. Obi et al found patients with Caprini scores higher than 8 to be at significantly greater risk of developing VTE compared to patients with scores of 7 or 8. Also, patients with scores of 7 or 8 were significantly more likely to have a VTE compared to those with scores of 5 or 6.16 In another study, Lobastov et al identified Caprini scores of 11 or higher as representing an extremely high-risk category for which standard prophylaxis regimens may not be effective.17 Thus, while having mandatory risk assessment has been shown to dramatically decrease VTE incidence, it is important to consider the magnitude of the numerical risk score. This is of particular importance at medical centers with high case-mix indices where patients at the highest risk might need to be managed with different prophylactic guidelines.
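The quantitative stratification described above can be summarized as a simple mapping from Caprini score to risk tier. The cutpoints below follow the studies discussed (Obi et al.: 5-6 vs. 7-8 vs. >8; Lobastov et al.: ≥11 as extremely high risk), but the tier labels are this sketch's own naming, not a published scheme.

```python
def caprini_tier(score: int) -> str:
    """Map a Caprini score to an illustrative VTE risk tier.

    Cutpoints reflect Obi et al. (5-6 < 7-8 < >8) and Lobastov et al.
    (>=11 as extremely high risk); labels are illustrative only.
    """
    if score >= 11:
        return "extremely high"  # standard prophylaxis may be insufficient
    if score >= 9:
        return "very high"
    if score >= 7:
        return "high"
    if score >= 5:
        return "moderate-high"
    return "lower"

print(caprini_tier(8))   # → high
print(caprini_tier(12))  # → extremely high
```

The point of a quantitative tier, rather than a binary "at risk / not at risk" flag, is that patients in the top tiers may warrant customized rather than standard prophylactic regimens.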
Another notable aspect of the process at our hospital was the wide variation in the types of prophylactic regimens ordered and in adherence to the ordered regimens. Only 25.5% of cases were maintained on a standard prophylactic regimen with no missed doses (heparin 5000 units every 8 hours or enoxaparin 40 mg daily). Thus, the vast majority of the patients who went on to develop VTE either were prescribed a nontraditional prophylaxis regimen or missed doses of standard agents. The need for secondary surgical procedures or other invasive interventions may explain many, but not all, of the missed doses.
The timing of prophylaxis initiation for our patients was also found to deviate from accepted standards. Only 16.8% of cases received prophylaxis upon induction of anesthesia, and furthermore, 38% of cases did not receive any anticoagulation within 24 hours of their index operation. While this variability in prophylaxis implementation was acceptable within the SCIP guidelines based on “high risk for bleeding” or other considerations, it likely contributed to our suboptimal outcomes. The variations and interruptions in prophylactic regimens speak to barriers that have previously been reported as contributing factors to noncompliance with VTE prophylaxis.18
Given these known barriers and the observed underutilization and improper use of our risk assessment tool, we have recently changed our surgical admission order sets such that a mandatory quantitative risk assessment must be done for every surgical patient at the time of admission/operation before other orders can be completed. Following completion of the assessment, the physician will be presented with an appropriate standard regimen based on the individual patient’s risk assessment. Early results of our VTE quality improvement project have been satisfying: in the most recent NSQIP semi-annual report, our O/E for VTE was 0.74, placing us in the first decile. Some of these early reports may simply be the product of the Hawthorne effect; however, we are encouraged by the early improvements seen in other research. While we are hopeful that these changes will result in sustainable improvements in outcomes, patients at extremely high risk may require novel weight-based or otherwise customized aggressive prophylactic regimens. Such regimens have already been proposed for arthroplasty and other high-risk patients.
Future research may identify other risk factors not captured by traditional risk assessments. In addition, research should continue to explore the use and efficacy of standard prophylactic regimens in these populations to help determine if they are sufficient. Currently, weight-based low-molecular-weight heparin dosing and alternative regimens employing fondaparinux are under investigation for very-high-risk patients.19
There were several limitations to the present study. First, due to the retrospective design of our study, we could collect only data that had been uniformly recorded in the charts throughout the study period. Second, we were unable to accurately assess compliance with mechanical prophylaxis. While our chart review showed that the vast majority of cases and controls were ordered to have mechanical prophylaxis, it is impossible to document how often these devices were used appropriately in a retrospective analysis. Anecdotal observation suggests that once patients are out of post-anesthesia or critical care units, SCD use is not standardized. The inability to measure compliance precisely may be leading to an overestimation of our compliance with prophylaxis. Finally, because our study included only patients who underwent surgery at our hospital, our observations may not be generalizable outside our institution.
Conclusion
Our study findings reinforce the importance of attention to detail in VTE risk assessment and in ordering and administering VTE prophylactic regimens, especially in high-risk surgical patients. While we adhered to the SCIP-mandated prophylaxis requirements, the complexity of our patients and our lack of a truly standardized approach to risk assessment and prophylactic regimens resulted in suboptimal outcomes. Stricter and more quantitative mandatory VTE risk assessment, along with highly standardized VTE prophylaxis regimens, are required to achieve optimal outcomes.
Corresponding author: Jason C. DeGiovanni, MS, BA, [email protected].
Financial disclosures: None.
1. Spyropoulos AC, Hussein M, Lin J, et al. Rates of symptomatic venous thromboembolism in US surgical patients: a retrospective administrative database study. J Thromb Thrombolysis. 2009;28:458-464.
2. Deitelzweig SB, Johnson BH, Lin J, et al. Prevalence of clinical venous thromboembolism in the USA: current trends and future projections. Am J Hematol. 2011;86:217-220.
3. Horlander KT, Mannino DM, Leeper KV. Pulmonary embolism mortality in the United States, 1979-1998: an analysis using multiple-cause mortality data. Arch Intern Med. 2003;163:1711-1717.
4. Guyatt GH, Akl EA, Crowther M, et al. Introduction to the ninth edition: antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(suppl):48S-52S.
5. Office of the Surgeon General; National Heart, Lung, and Blood Institute. The Surgeon General’s Call to Action to Prevent Deep Vein Thrombosis and Pulmonary Embolism. Rockville, MD: Office of the Surgeon General; 2008. www.ncbi.nlm.nih.gov/books/NBK44178/. Accessed May 2, 2019.
6. Pannucci CJ, Swistun L, MacDonald JK, et al. Individualized venous thromboembolism risk stratification using the 2005 Caprini score to identify the benefits and harms of chemoprophylaxis in surgical patients: a meta-analysis. Ann Surg. 2017;265:1094-1102.
7. Caprini JA, Arcelus JI, Hasty JH, et al. Clinical assessment of venous thromboembolic risk in surgical patients. Semin Thromb Hemost. 1991;17(suppl 3):304-312.
8. Caprini JA. Risk assessment as a guide for the prevention of the many faces of venous thromboembolism. Am J Surg. 2010;199:S3-S10.
9. Gould MK, Garcia DA, Wren SM, et al. Prevention of VTE in nonorthopedic surgical patients: antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(2 Suppl):e227S-e277S.
10. The Joint Commission. Surgical Care Improvement Project (SCIP) Measure Information Form (Version 2.1c). www.jointcommission.org/surgical_care_improvement_project_scip_measure_information_form_version_21c/. Accessed June 22, 2016.
11. Cassidy MR, Macht RD, Rosenkranz P, et al. Patterns of failure of a standardized perioperative venous thromboembolism prophylaxis protocol. J Am Coll Surg. 2016;222:1074-1081.
12. Wang TF, Wong CA, Milligan PE, et al. Risk factors for inpatient venous thromboembolism despite thromboprophylaxis. Thromb Res. 2014;133:25-29.
13. Cassidy MR, Rosenkranz P, McAneny D. Reducing postoperative venous thromboembolism complications with a standardized risk-stratified prophylaxis protocol and mobilization program. J Am Coll Surg. 2014;218:1095-1104.
14. Nimeri AA, Gamaleldin MM, McKenna KL, et al. Reduction of venous thromboembolism in surgical patients using a mandatory risk-scoring system: 5-year follow-up of an American College of Surgeons National Quality Improvement Program. Clin Appl Thromb Hemost. 2017;23:392-396.
15. Borab ZM, Lanni MA, Tecce MG, et al. Use of computerized clinical decision support systems to prevent venous thromboembolism in surgical patients: a systematic review and meta-analysis. JAMA Surg. 2017;152:638-645.
16. Obi AT, Pannucci CJ, Nackashi A, et al. Validation of the Caprini venous thromboembolism risk assessment model in critically ill surgical patients. JAMA Surg. 2015;150:941-948.
17. Lobastov K, Barinov V, Schastlivtsev I, et al. Validation of the Caprini risk assessment model for venous thromboembolism in high-risk surgical patients in the background of standard prophylaxis. J Vasc Surg Venous Lymphat Disord. 2016;4:153-160.
18. Kakkar AK, Cohen AT, Tapson VF, et al. Venous thromboembolism risk and prophylaxis in the acute care hospital setting (ENDORSE survey): findings in surgical patients. Ann Surg. 2010;251:330-338.
19. Smythe MA, Priziola J, Dobesh PP, et al. Guidance for the practical management of the heparin anticoagulants in the treatment of venous thromboembolism. J Thromb Thrombolysis. 2016;41:165-186.
From Tufts Medical Center, Boston, MA.
Abstract
- Objective: Audits at our academic medical center revealed near 100% compliance with protocols for perioperative venous thromboembolism (VTE) prophylaxis, but recent National Surgical Quality Improvement Program data demonstrated a higher than expected incidence of VTE (observed/expected = 1.32). The objective of this study was to identify potential causes of this discrepancy.
- Design: Retrospective case-control study.
- Setting: Urban academic medical center with high case-mix indices (Medicare approximately 2.4, non-Medicare approximately 2.0).
- Participants: 102 surgical inpatients with VTE (September 2012 to October 2015) matched with controls for age, gender, and type of procedure.
- Measurements: Prevalence of common VTE risk factors, length of stay, number of procedures, index operation times, and postoperative bed rest > 12 hours were assessed. Utilization of and compliance with our VTE risk assessment tool was also investigated.
- Results: Cases underwent more procedures and had longer lengths of stay and index procedures than controls. In addition, cases were more likely to have had > 12 hours of postoperative bed rest and central venous access than controls. Cases had more infections and were more likely to have severe lung disease, thrombophilia, and a history of prior VTE than controls. No differences in body mass index, tobacco use, current or previous malignancy, or VTE risk assessment form use were observed. Overall, care complexity and risk factors were equally important in determining VTE incidence. Our analyses also revealed lack of strict adherence to our VTE risk stratification protocol and frequent use of suboptimal prophylactic regimens.
- Conclusion: Well-accepted risk factors and overall care complexity determine VTE risk. Preventing VTE in high-risk patients requires assiduous attention to detail in VTE risk assessment and in delivery of optimal prophylaxis. Patients at especially high risk may require customized prophylactic regimens.
Keywords: hospital-acquired venous thromboembolic disease; VTE prophylaxis, surgical patients.
Deep vein thrombosis (DVT) and pulmonary embolism (PE) are well-recognized causes of morbidity and mortality in surgical patients. Between 350,000 and 600,000 cases of venous thromboembolism (VTE) occur each year in the United States, and it is responsible for approximately 10% of preventable in-hospital fatalities.1-3 Given VTE’s impact on patients and the healthcare system and the fact that it is preventable, intense effort has been focused on developing more effective prophylactic measures to decrease its incidence.2-4 In 2008, the surgeon general issued a “call to action” for increased efforts to prevent VTE.5
The American College of Chest Physicians (ACCP) guidelines subcategorize patients based on type of surgery. In addition, the ACCP guidelines support the use of a Caprini-based scoring system to aid in risk stratification and improve clinical decision-making.
Our hospital, a 350-bed academic medical center in downtown Boston, MA, serving a diverse population with a very high case-mix index (2.4 Medicare and 2.0 non-Medicare), has strict protocols for VTE prophylaxis consistent with the ACCP guidelines and based on the Surgical Care Improvement Project (SCIP) measures published in 2006.10 The SCIP mandates allow for considerable surgeon discretion in the use of chemoprophylaxis for neurosurgical cases and general and orthopedic surgery cases deemed to be at high risk for bleeding. In addition, SCIP requires only that prophylaxis be initiated within 24 hours of surgical end time. Although recent audits revealed nearly 100% compliance with SCIP-mandated protocols, National Surgical Quality Improvement Program (NSQIP) data showed that the incidence of VTE events at our institution was higher than expected (observed/expected [O/E] = 1.32).
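The NSQIP observed/expected (O/E) ratio cited above is simply the count of observed events divided by the number of events a risk-adjusted model predicts. A minimal sketch with hypothetical counts (the 33 and 25 below are illustrative, not our institution's actual figures):

```python
def oe_ratio(observed_events: int, expected_events: float) -> float:
    """Observed/expected (O/E) ratio: values above 1.0 indicate more
    events than a risk-adjusted model predicts."""
    if expected_events <= 0:
        raise ValueError("expected_events must be positive")
    return observed_events / expected_events

# Hypothetical illustration: 33 observed VTE events against a
# risk-adjusted expectation of 25 yields an O/E of 1.32,
# flagging worse-than-expected performance.
print(round(oe_ratio(33, 25.0), 2))  # 1.32
```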
In order to determine the reasons for this mismatch between process and outcome performance, we investigated whether there were characteristics of our patient population that contributed to the higher than expected rates of VTE, and we scrutinized our VTE prophylaxis protocol to determine if there were aspects of our process that were also contributory.
Methods
Study Sample
This is a retrospective case-control study of surgical inpatients at our hospital during the period September 2012 to October 2015. Cases were identified as patients diagnosed with a VTE (DVT or PE). Controls were identified from a pool of surgical patients whose courses were not complicated by VTE during the same time frame as the cases and who were matched as closely as possible by procedure code, age, and gender.
Variables
Patient and hospital course variables that were analyzed included demographics, comorbidities, length of stay, number of procedures, index operation times, duration of postoperative bed rest, use of mechanical prophylaxis, and type of chemoprophylaxis and the time frame within which it was initiated. Data were collected via chart review using International Classification of Diseases-9 and -10 codes to identify surgical patients diagnosed with VTE within the study period. Demographic variables included age, sex, and ethnicity. Comorbidities included hypertension, diabetes, coronary artery disease, serious lung disease, previous or current malignancy, documented hypercoagulable state, and previous history of VTE. Body mass index (BMI) was also recorded. The aforementioned disease-specific variables were not matched between the case and control groups, as these data were obtained retrospectively during data collection.
Analysis
Associations between cases and matched controls were analyzed using the paired t-test for continuous variables and McNemar’s test for categorical variables. P values < 0.05 were considered statistically significant. SAS Enterprise Guide 7.15 (SAS Institute, Cary, NC) was used for all statistical analyses.
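For readers unfamiliar with the matched-pair analysis described above, McNemar’s test compares the discordant pairs in a matched case-control design. The pure-Python sketch below (an illustration, not the SAS code actually used) computes the continuity-corrected McNemar chi-square and its two-sided p value for one degree of freedom; the discordant-pair counts are hypothetical:

```python
import math

def mcnemar(b: int, c: int):
    """McNemar's test for paired binary data.
    b = pairs where only the case has the risk factor,
    c = pairs where only the control has it.
    Returns (continuity-corrected chi-square, two-sided p value)."""
    if b + c == 0:
        raise ValueError("no discordant pairs")
    stat = (abs(b - c) - 1) ** 2 / (b + c)
    # Survival function of a chi-square with 1 df, via the
    # complementary error function: P(X > x) = erfc(sqrt(x/2)).
    p = math.erfc(math.sqrt(stat / 2))
    return stat, p

# Hypothetical counts: 25 case-only vs 10 control-only discordant pairs
stat, p = mcnemar(25, 10)
print(round(stat, 2), p < 0.05)  # 5.6 True
```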
The requirement for informed consent was waived by our Institutional Review Board, as the study was initially deemed to be a quality improvement project, and all data used for this report were de-identified.
Results
Our retrospective case-control analysis included a sample of 102 surgical patients whose courses were complicated by VTE between September 2012 and October 2015. The cases were distributed among 6 different surgical categories (Figure 1): trauma (20%), cancer (10%), cardiovascular (21%), noncancer neurosurgery (28%), elective orthopedics (11%), and miscellaneous general surgery (10%).
Comparisons between cases and controls in terms of patient demographics and risk factors are shown in Table 2. No statistically significant difference was observed in ethnicity or race between the 2 groups. Overall, cases had more hip/pelvis/leg fractures at presentation (P = 0.0008). The case group also had higher proportions of patients with postoperative bed rest greater than 12 hours (P = 0.009), central venous access (P < 0.0001), infection (P < 0.0001), and lower extremity edema documented during the hospitalization prior to development of DVT (P < 0.0001). Additionally, cases had significantly greater rates of previous VTE (P = 0.0004), inherited or acquired thrombophilia (P = 0.03), history of stroke (P = 0.0003), and severe lung disease, including pneumonia (P = 0.0008). No significant differences were noted between cases and matched controls in BMI (P = 0.43), current tobacco use (P = 0.71), current malignancy (P = 0.80), previous malignancy (P = 0.83), head trauma (P = 0.17), or acute cardiac disease (myocardial infarction or congestive heart failure; P = 0.12).
Variables felt to indicate overall complexity of hospital course for cases as compared to controls are outlined in Table 3. Cases were found to have significantly longer lengths of stay (median, 15.5 days versus 3 days, P < 0.0001). To account for the possibility that the development of VTE contributed to the increased length of stay in the cases, we also looked at the duration between admission date and the date of VTE diagnosis and determined that cases still had a longer length of stay when this was accounted for (median, 7 days versus 3 days, P < 0.0001). A much higher proportion of cases underwent more than 1 procedure compared to controls (P < 0.0001), and cases had significantly longer index operations as compared to controls (P = 0.002).
Seventeen cases received heparin on induction during their index procedure, compared to 23 controls (P = 0.24). Additionally, 63 cases began a prophylaxis regimen within 24 hours of surgery end time, compared to 68 controls (P = 0.24). The chemoprophylactic regimens utilized in cases and in controls are summarized in Figure 2. Of note, only 26 cases and 32 controls received standard prophylactic regimens with no missed doses (heparin 5000 units 3 times daily or enoxaparin 40 mg daily). Additionally, in over half of cases and a third of controls, nonstandard regimens were ordered. Examples of nonstandard regimens included nonstandard heparin or enoxaparin doses, low-dose warfarin, or aspirin alone. In most cases, nonstandard regimens were justified on the basis of high risk for bleeding.
Mechanical prophylaxis with pneumatic sequential compression devices (SCDs) was ordered in 93 (91%) cases and 87 (85%) controls; however, we were unable to accurately document uniform compliance in the use of these devices.
With regard to evaluation of our process measures, we found only 17% of cases and controls combined actually had a VTE risk assessment in their chart, and when it was present, it was often incomplete or was completed inaccurately.
Discussion
The goal of this study was to identify factors (patient characteristics and/or processes of care) that may be contributing to the higher than expected incidence of VTE events at our medical center, despite internal audits suggesting near perfect compliance with SCIP-mandated protocols. We found that in addition to usual risk factors for VTE, an overarching theme of our case cohort was their high complexity of illness. At baseline, these patients had significantly greater rates of stroke, thrombophilia, severe lung disease, infection, and history of VTE than controls. Moreover, the hospital courses of cases were significantly more complex than those of controls, as these patients had more procedures, longer lengths of stay and longer index operations, higher rates of postoperative bed rest exceeding 12 hours, and more prevalent central venous access than controls (Table 2). Several of these risk factors have been found to contribute to VTE development despite compliance with prophylaxis protocols.
Cassidy et al reviewed a cohort of nontrauma general surgery patients who developed VTE despite receiving appropriate prophylaxis and found that both multiple operations and emergency procedures contributed to the failure of VTE prophylaxis.11 Similarly, Wang et al identified several independent risk factors for VTE despite thromboprophylaxis, including central venous access and infection, as well as intensive care unit admission, hospitalization for cranial surgery, and admission from a long-term care facility.12 While our study did not capture some of these additional factors considered by Wang et al, the presence of risk factors not captured in traditional assessment tools suggests that additional consideration for complex patients is warranted.
In addition to these nonmodifiable patient characteristics, aspects of our VTE prophylaxis processes likely contributed to the higher than expected rate of VTE. While the electronic medical record at our institution does contain a VTE risk assessment tool based on the Caprini score, we found that it often is not used at all, or is used incompletely or incorrectly, likely because physicians are neither prompted nor required to complete the assessment before prescribing VTE prophylaxis.
There is a significant body of evidence demonstrating that mandatory computerized VTE risk assessments can effectively reduce VTE rates and that improved outcomes occur shortly after implementation. Cassidy et al demonstrated the benefits of instituting a hospital-wide, mandatory, Caprini-based computerized VTE risk assessment that provides prophylaxis/early ambulation recommendations. Two years after implementing this system, they observed an 84% reduction in DVTs (P < 0.001) and a 55% reduction in PEs (P < 0.001).13 Nimeri et al had similarly impressive success, achieving a reduction in their NSQIP O/E for PE/DVT in general surgery from 6.00 in 2010 to 0.82 (for DVTs) and 0.78 (for PEs) 5 years after implementation of mandatory VTE risk assessment (though they noted that the most dramatic reduction occurred 1 year after implementation).14 Additionally, a recent systematic review and meta-analysis by Borab et al found computerized VTE risk assessments to be associated with a significant decrease in VTE events.15
The risk assessment tool used at our institution is qualitative in nature, and current literature suggests that employing a more quantitative tool may yield improved outcomes. Numerous studies have highlighted the importance of identifying patients at very high risk for VTE, as higher risk may necessitate more careful consideration of their prophylactic regimens. Obi et al found patients with Caprini scores higher than 8 to be at significantly greater risk of developing VTE compared to patients with scores of 7 or 8. Also, patients with scores of 7 or 8 were significantly more likely to have a VTE compared to those with scores of 5 or 6.16 In another study, Lobastov et al identified Caprini scores of 11 or higher as representing an extremely high-risk category for which standard prophylaxis regimens may not be effective.17 Thus, while having mandatory risk assessment has been shown to dramatically decrease VTE incidence, it is important to consider the magnitude of the numerical risk score. This is of particular importance at medical centers with high case-mix indices where patients at the highest risk might need to be managed with different prophylactic guidelines.
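The graded risk described by Obi et al and Lobastov et al can be thought of as a tiered lookup on the Caprini score. The sketch below paraphrases those bands; the tier labels and the handling of boundary values are illustrative, not a validated clinical rule:

```python
def caprini_tier(score: int) -> str:
    """Map a Caprini score to the graded tiers discussed in the text.
    Bands follow Obi et al (5-6 < 7-8 < >8) and Lobastov et al
    (>= 11 as extremely high risk); labels are illustrative."""
    if score < 0:
        raise ValueError("score must be non-negative")
    if score >= 11:
        return "extremely high (standard prophylaxis may be inadequate)"
    if score > 8:
        return "very high"
    if score >= 7:
        return "high"
    if score >= 5:
        return "moderate-high"
    return "lower risk"

print(caprini_tier(12))  # extremely high (standard prophylaxis may be inadequate)
```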
Another notable aspect of the process at our hospital was the great variation in the types of prophylactic regimens ordered and in adherence to what was ordered. Only 25.5% of patients were maintained on a standard prophylactic regimen with no missed doses (heparin 5000 units every 8 hours or enoxaparin 40 mg daily). Thus, the vast majority of the patients who went on to develop VTE either were prescribed a nontraditional prophylaxis regimen or missed doses of standard agents. The need for secondary surgical procedures or other invasive interventions may explain many, but not all, of the missed doses.
The timing of prophylaxis initiation for our patients was also found to deviate from accepted standards. Only 16.8% of cases received prophylaxis upon induction of anesthesia, and 38% of cases did not receive any anticoagulation within 24 hours of their index operation. While this variability in prophylaxis implementation was acceptable within the SCIP guidelines based on “high risk for bleeding” or other considerations, it likely contributed to our suboptimal outcomes. The variations and interruptions in prophylactic regimens speak to barriers that have previously been reported as contributing factors to noncompliance with VTE prophylaxis.18
Given these known barriers and the observed underutilization and improper use of our risk assessment tool, we have recently changed our surgical admission order sets so that a mandatory quantitative risk assessment must be completed for every surgical patient at the time of admission/operation before other orders can be entered. Following completion of the assessment, the physician is presented with an appropriate standard regimen based on the individual patient’s risk assessment. Early results of our VTE quality improvement project have been satisfying: in the most recent NSQIP semi-annual report, our O/E for VTE was 0.74, placing us in the first decile. Some of this early improvement may simply reflect the Hawthorne effect; however, we are encouraged by the gains reported after similar interventions elsewhere. While we are hopeful that these changes will produce sustainable improvements in outcomes, patients at extremely high risk may require novel weight-based or otherwise customized aggressive prophylactic regimens. Such regimens have already been proposed for arthroplasty and other high-risk patients.
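The order-set change described above amounts to a simple gate: prophylaxis recommendations are unavailable until a quantitative assessment is recorded. A hedged sketch of that logic follows; the class name, score thresholds, and the extremely-high-risk flag are hypothetical illustrations, though the standard regimens mirror those named in the text (enoxaparin 40 mg daily or heparin 5000 units every 8 hours):

```python
from typing import Optional

class OrderSetGate:
    """Sketch of a mandatory VTE risk-assessment gate for a surgical
    admission order set. Thresholds and labels are illustrative, not
    a clinical protocol."""

    def __init__(self) -> None:
        self.score: Optional[int] = None

    def record_assessment(self, caprini_score: int) -> None:
        """Store the quantitative risk score; until this is called,
        no prophylaxis recommendation can be generated."""
        self.score = caprini_score

    def recommend(self) -> str:
        if self.score is None:
            # Mirrors the order set blocking other orders until the
            # mandatory assessment is completed.
            raise RuntimeError("complete the VTE risk assessment first")
        if self.score >= 11:
            # Lobastov et al: standard regimens may be inadequate here.
            return "flag for pharmacy consult: consider customized regimen"
        if self.score >= 5:
            return "enoxaparin 40 mg daily (or heparin 5000 units q8h) plus SCDs"
        return "early ambulation plus SCDs"

gate = OrderSetGate()
gate.record_assessment(6)
print(gate.recommend())  # standard chemoprophylaxis suggestion
```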
Future research may identify other risk factors not captured by traditional risk assessments. In addition, research should continue to explore the use and efficacy of standard prophylactic regimens in these populations to help determine if they are sufficient. Currently, weight-based low-molecular-weight heparin dosing and alternative regimens employing fondaparinux are under investigation for very-high-risk patients.19
There were several limitations to the present study. First, due to the retrospective design of our study, we could collect only data that had been uniformly recorded in the charts throughout the study period. Second, we were unable to accurately assess compliance with mechanical prophylaxis. While our chart review showed that the vast majority of cases and controls were ordered to have mechanical prophylaxis, it is impossible in a retrospective analysis to document how often these devices were used appropriately. Anecdotal observation suggests that once patients leave post-anesthesia or critical care units, SCD use is not standardized. This inability to measure compliance precisely may lead to an overestimation of adherence to prophylaxis. Finally, because our study included only patients who underwent surgery at our hospital, our observations may not be generalizable outside our institution.
Conclusion
Our study findings reinforce the importance of attention to detail in VTE risk assessment and in ordering and administering VTE prophylactic regimens, especially in high-risk surgical patients. While we adhered to the SCIP-mandated prophylaxis requirements, the complexity of our patients and our lack of a truly standardized approach to risk assessment and prophylactic regimens resulted in suboptimal outcomes. Stricter, more quantitative mandatory VTE risk assessment, along with highly standardized VTE prophylaxis regimens, is required to achieve optimal outcomes.
Corresponding author: Jason C. DeGiovanni, MS, BA, [email protected].
Financial disclosures: None.
Mechanical prophylaxis with pneumatic sequential compression devices (SCDs) was ordered in 93 (91%) cases and 87 (85%) controls; however, we were unable to accurately document uniform compliance in the use of these devices.
With regard to our process measures, we found that only 17% of cases and controls combined actually had a VTE risk assessment in their chart, and when one was present, it was often incomplete or inaccurately completed.
Discussion
The goal of this study was to identify factors (patient characteristics and/or processes of care) that may be contributing to the higher than expected incidence of VTE events at our medical center, despite internal audits suggesting near perfect compliance with SCIP-mandated protocols. We found that in addition to usual risk factors for VTE, an overarching theme of our case cohort was their high complexity of illness. At baseline, these patients had significantly greater rates of stroke, thrombophilia, severe lung disease, infection, and history of VTE than controls. Moreover, the hospital courses of cases were significantly more complex than those of controls, as these patients had more procedures, longer lengths of stay and longer index operations, higher rates of postoperative bed rest exceeding 12 hours, and more prevalent central venous access than controls (Table 2). Several of these risk factors have been found to contribute to VTE development despite compliance with prophylaxis protocols.
Cassidy et al reviewed a cohort of nontrauma general surgery patients who developed VTE despite receiving appropriate prophylaxis and found that both multiple operations and emergency procedures contributed to the failure of VTE prophylaxis.11 Similarly, Wang et al identified several independent risk factors for VTE despite thromboprophylaxis, including central venous access and infection, as well as intensive care unit admission, hospitalization for cranial surgery, and admission from a long-term care facility.12 While our study did not capture some of these additional factors considered by Wang et al, the presence of risk factors not captured in traditional assessment tools suggests that additional consideration for complex patients is warranted.
In addition to these nonmodifiable patient characteristics, aspects of our VTE prophylaxis processes likely contributed to the higher than expected rate of VTE. While the electronic medical record at our institution does contain a VTE risk assessment tool based on the Caprini score, we found that it often is not used at all, or is used incorrectly or incompletely, which likely reflects the fact that physicians are neither prompted nor required to complete the assessment before prescribing VTE prophylaxis.
There is a significant body of evidence demonstrating that mandatory computerized VTE risk assessments can effectively reduce VTE rates and that improved outcomes occur shortly after implementation. Cassidy et al demonstrated the benefits of instituting a hospital-wide, mandatory, Caprini-based computerized VTE risk assessment that provides prophylaxis/early ambulation recommendations. Two years after implementing this system, they observed an 84% reduction in DVTs (P < 0.001) and a 55% reduction in PEs (P < 0.001).13 Nimeri et al had similarly impressive success, achieving a reduction in their NSQIP O/E for PE/DVT in general surgery from 6.00 in 2010 to 0.82 (for DVTs) and 0.78 (for PEs) 5 years after implementation of mandatory VTE risk assessment (though they noted that the most dramatic reduction occurred 1 year after implementation).14 Additionally, a recent systematic review and meta-analysis by Borab et al found computerized VTE risk assessments to be associated with a significant decrease in VTE events.15
The risk assessment tool used at our institution is qualitative in nature, and current literature suggests that employing a more quantitative tool may yield improved outcomes. Numerous studies have highlighted the importance of identifying patients at very high risk for VTE, as higher risk may necessitate more careful consideration of their prophylactic regimens. Obi et al found patients with Caprini scores higher than 8 to be at significantly greater risk of developing VTE compared to patients with scores of 7 or 8. Also, patients with scores of 7 or 8 were significantly more likely to have a VTE compared to those with scores of 5 or 6.16 In another study, Lobastov et al identified Caprini scores of 11 or higher as representing an extremely high-risk category for which standard prophylaxis regimens may not be effective.17 Thus, while having mandatory risk assessment has been shown to dramatically decrease VTE incidence, it is important to consider the magnitude of the numerical risk score. This is of particular importance at medical centers with high case-mix indices where patients at the highest risk might need to be managed with different prophylactic guidelines.
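The score strata discussed above (5-6 versus 7-8 versus >8 from Obi et al, and ≥11 as an extreme-risk tier from Lobastov et al) can be expressed as a simple lookup. This is a sketch for illustration only; the tier labels are our own shorthand, not a published scale.

```python
def caprini_category(score: int) -> str:
    """Map a quantitative Caprini score to an illustrative risk tier.

    Cut points follow Obi et al (5-6 < 7-8 < >8) and Lobastov et al
    (>= 11 as an extreme-risk tier where standard prophylaxis may fail).
    """
    if score < 0:
        raise ValueError("Caprini score cannot be negative")
    if score >= 11:
        return "extreme"        # standard regimens may be insufficient
    if score > 8:
        return "very high"
    if score >= 7:
        return "high"
    if score >= 5:
        return "moderate-high"
    return "lower"

for s in (4, 5, 7, 9, 11):
    print(s, caprini_category(s))
```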
Another notable aspect of the process at our hospital was the great variation in the types of prophylactic regimens ordered and in adherence to what was ordered. Only 25.5% of patients were maintained on a standard prophylactic regimen with no missed doses (heparin 5000 units every 8 hours or enoxaparin 40 mg daily). Thus, the vast majority of the patients who went on to develop VTE either were prescribed a nontraditional prophylaxis regimen or missed doses of standard agents. The need for secondary surgical procedures or other invasive interventions may explain many, but not all, of the missed doses.
The timing of prophylaxis initiation for our patients was also found to deviate from accepted standards. Only 16.8% of cases received prophylaxis upon induction of anesthesia, and furthermore, 38% of cases did not receive any anticoagulation within 24 hours of their index operation. While this variability in prophylaxis implementation was acceptable within the SCIP guidelines based on “high risk for bleeding” or other considerations, it likely contributed to our suboptimal outcomes. The variations and interruptions in prophylactic regimens speak to barriers that have previously been reported as contributing factors to noncompliance with VTE prophylaxis.18
Given these known barriers and the observed underutilization and improper use of our risk assessment tool, we have recently changed our surgical admission order sets such that a mandatory quantitative risk assessment must be done for every surgical patient at the time of admission/operation before other orders can be completed. Following completion of the assessment, the physician will be presented with an appropriate standard regimen based on the individual patient’s risk assessment. Early results of our VTE quality improvement project have been satisfying: in the most recent NSQIP semi-annual report, our O/E for VTE was 0.74, placing us in the first decile. Some of these early reports may simply be the product of the Hawthorne effect; however, we are encouraged by the early improvements seen in other research. While we are hopeful that these changes will result in sustainable improvements in outcomes, patients at extremely high risk may require novel weight-based or otherwise customized aggressive prophylactic regimens. Such regimens have already been proposed for arthroplasty and other high-risk patients.
Future research may identify other risk factors not captured by traditional risk assessments. In addition, research should continue to explore the use and efficacy of standard prophylactic regimens in these populations to help determine if they are sufficient. Currently, weight-based low-molecular-weight heparin dosing and alternative regimens employing fondaparinux are under investigation for very-high-risk patients.19
There were several limitations to the present study. First, due to the retrospective design of our study, we could collect only data that had been uniformly recorded in the charts throughout the study period. Second, we were unable to accurately assess compliance with mechanical prophylaxis. While our chart review showed that the vast majority of cases and controls were ordered to have mechanical prophylaxis, it is impossible to document how often these devices were used appropriately in a retrospective analysis. Anecdotal observation suggests that once patients are out of post-anesthesia or critical care units, SCD use is not standardized. The inability to measure compliance precisely may be leading to an overestimation of our compliance with prophylaxis. Finally, because our study included only patients who underwent surgery at our hospital, our observations may not be generalizable outside our institution.
Conclusion
Our study findings reinforce the importance of attention to detail in VTE risk assessment and in ordering and administering VTE prophylactic regimens, especially in high-risk surgical patients. While we adhered to the SCIP-mandated prophylaxis requirements, the complexity of our patients and our lack of a truly standardized approach to risk assessment and prophylactic regimens resulted in suboptimal outcomes. Stricter and more quantitative mandatory VTE risk assessment, along with highly standardized VTE prophylaxis regimens, is required to achieve optimal outcomes.
Corresponding author: Jason C. DeGiovanni, MS, BA, [email protected].
Financial disclosures: None.
1. Spyropoulos AC, Hussein M, Lin J, et al. Rates of symptomatic venous thromboembolism in US surgical patients: a retrospective administrative database study. J Thromb Thrombolysis. 2009;28:458-464.
2. Deitzelzweig SB, Johnson BH, Lin J, et al. Prevalence of clinical venous thromboembolism in the USA: Current trends and future projections. Am J Hematol. 2011;86:217-220.
3. Horlander KT, Mannino DM, Leeper KV. Pulmonary embolism mortality in the United States, 1979-1998: an analysis using multiple-cause mortality data. Arch Intern Med. 2003;163:1711-1717.
4. Guyatt GH, Akl EA, Crowther M, et al. Introduction to the ninth edition: antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(suppl):48S-52S.
5. Office of the Surgeon General; National Heart, Lung, and Blood Institute. The Surgeon General’s Call to Action to Prevent Deep Vein Thrombosis and Pulmonary Embolism. Rockville, MD: Office of the Surgeon General; 2008. www.ncbi.nlm.nih.gov/books/NBK44178/. Accessed May 2, 2019.
6. Pannucci CJ, Swistun L, MacDonald JK, et al. Individualized venous thromboembolism risk stratification using the 2005 Caprini score to identify the benefits and harms of chemoprophylaxis in surgical patients: a meta-analysis. Ann Surg. 2017;265:1094-1102.
7. Caprini JA, Arcelus JI, Hasty JH, et al. Clinical assessment of venous thromboembolic risk in surgical patients. Semin Thromb Hemost. 1991;17(suppl 3):304-312.
8. Caprini JA. Risk assessment as a guide for the prevention of the many faces of venous thromboembolism. Am J Surg. 2010;199:S3-S10.
9. Gould MK, Garcia DA, Wren SM, et al. Prevention of VTE in nonorthopedic surgical patients: antithrombotic therapy and prevention of thrombosis, 9th ed: American College of Chest Physicians Evidence-Based Clinical Practice Guidelines. Chest. 2012;141(2 Suppl):e227S-e277S.
10. The Joint Commission. Surgical Care Improvement Project (SCIP) Measure Information Form (Version 2.1c). www.jointcommission.org/surgical_care_improvement_project_scip_measure_information_form_version_21c/. Accessed June 22, 2016.
11. Cassidy MR, Macht RD, Rosenkranz P, et al. Patterns of failure of a standardized perioperative venous thromboembolism prophylaxis protocol. J Am Coll Surg. 2016;222:1074-1081.
12. Wang TF, Wong CA, Milligan PE, et al. Risk factors for inpatient venous thromboembolism despite thromboprophylaxis. Thromb Res. 2014;133:25-29.
13. Cassidy MR, Rosenkranz P, McAneny D. Reducing postoperative venous thromboembolism complications with a standardized risk-stratified prophylaxis protocol and mobilization program. J Am Coll Surg. 2014;218:1095-1104.
14. Nimeri AA, Gamaleldin MM, McKenna KL, et al. Reduction of venous thromboembolism in surgical patients using a mandatory risk-scoring system: 5-year follow-up of an American College of Surgeons National Quality Improvement Program. Clin Appl Thromb Hemost. 2017;23:392-396.
15. Borab ZM, Lanni MA, Tecce MG, et al. Use of computerized clinical decision support systems to prevent venous thromboembolism in surgical patients: a systematic review and meta-analysis. JAMA Surg. 2017;152:638–645.
16. Obi AT, Pannucci CJ, Nackashi A, et al. Validation of the Caprini venous thromboembolism risk assessment model in critically ill surgical patients. JAMA Surg. 2015;150:941-948.
17. Lobastov K, Barinov V, Schastlivtsev I, et al. Validation of the Caprini risk assessment model for venous thromboembolism in high-risk surgical patients in the background of standard prophylaxis. J Vasc Surg Venous Lymphat Disord. 2016;4:153-160.
18. Kakkar AK, Cohen AT, Tapson VF, et al. Venous thromboembolism risk and prophylaxis in the acute care hospital setting (ENDORSE survey): findings in surgical patients. Ann Surg. 2010;251:330-338.
19. Smythe MA, Priziola J, Dobesh PP, et al. Guidance for the practical management of the heparin anticoagulants in the treatment of venous thromboembolism. J Thromb Thrombolysis. 2016;41:165-186.
Once-Daily 2-Drug versus 3-Drug Antiretroviral Therapy for HIV Infection in Treatment-naive Adults: Less Is Best?
Study Overview
Objective. To evaluate the efficacy and safety of a once-daily 2-drug antiretroviral (ARV) regimen, dolutegravir plus lamivudine, for the treatment of HIV-1 infection in adults naive to antiretroviral therapy (ART).
Design. GEMINI-1 and GEMINI-2 were 2 identically designed multicenter, double-blind, randomized, noninferiority, phase 3 clinical trials conducted between July 18, 2016 and March 31, 2017. Participants were randomly assigned to receive 1 of 2 once-daily HIV regimens: the study regimen, consisting of once-daily dolutegravir 50 mg plus lamivudine 300 mg, or the standard-of-care regimen, consisting of once-daily dolutegravir 50 mg plus tenofovir disoproxil fumarate (TDF) 300 mg plus emtricitabine 200 mg. While this article presents results at week 48, both trials are scheduled to follow participants through week 148 to evaluate long-term efficacy and safety.
Setting and participants. Eligible participants had to be aged 18 years or older with treatment-naive HIV-1 infection. Women were eligible if they were not (1) pregnant, (2) lactating, or (3) of reproductive potential, with the latter defined by various means, including tubal ligation, hysterectomy, postmenopausal status, and the use of highly effective contraception. Initially, eligibility screening restricted participation to those with viral loads between 1000 and 100,000 copies/mL. However, the upper limit was later increased to 500,000 copies/mL based on an independent review of results from other clinical trials1,2 evaluating dual therapy with dolutegravir and lamivudine, which indicated efficacy in patients with viral loads up to 500,000 copies/mL.3-5
Notable exclusion criteria included: (1) major mutations to nucleoside reverse transcriptase inhibitors, non-nucleoside reverse transcriptase inhibitors, and protease inhibitors; (2) evidence of hepatitis B infection; (3) hepatitis C infection with anticipation of initiating treatment within 48 weeks of study enrollment; and (4) stage 3 HIV disease, per Centers for Disease Control and Prevention criteria, with the exception of cutaneous Kaposi sarcoma and CD4 cell counts < 200 cells/mL.
Main outcome measures. The primary endpoint was demonstration of noninferiority of the 2-drug ARV regimen through assessment of the proportion of participants who achieved virologic suppression at week 48 in the intent-to-treat-exposed population. For the purposes of this study, virologic suppression was defined as having fewer than 50 copies of HIV-1 RNA per mL at week 48. For evaluation of safety and toxicity concerns, renal and bone biomarkers were assessed at study entry and at weeks 24 and 48. In addition, participants who met virological withdrawal criteria were evaluated for integrase strand transfer inhibitor mutations. Virological withdrawal was defined as the presence of 1 of the following: (1) HIV RNA > 200 copies/mL at week 24, (2) HIV RNA > 200 copies/mL after previous HIV RNA < 200 copies/mL (confirmed rebound), and (3) a < 1 log10 copies/mL decrease from baseline (unless already < 200 copies/mL).
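The three virological-withdrawal criteria can be captured in a small helper function; the function name and parameters are hypothetical, but the thresholds come directly from the definition above.

```python
import math

def meets_virological_withdrawal(week, hiv_rna, baseline_rna,
                                 prior_suppressed_below_200):
    """Return True if any virological-withdrawal criterion is met.

    Criteria (from the study definition):
      1. HIV RNA > 200 copies/mL at week 24
      2. confirmed rebound: > 200 copies/mL after a previous result < 200
      3. < 1 log10 copies/mL decrease from baseline
         (unless already < 200 copies/mL)
    """
    if week == 24 and hiv_rna > 200:
        return True
    if prior_suppressed_below_200 and hiv_rna > 200:
        return True
    log_drop = math.log10(baseline_rna) - math.log10(hiv_rna)
    if log_drop < 1 and hiv_rna >= 200:
        return True
    return False

# Hypothetical example values
print(meets_virological_withdrawal(24, 250, 50000, False))    # criterion 1
print(meets_virological_withdrawal(36, 300, 50000, True))     # confirmed rebound
print(meets_virological_withdrawal(12, 150, 50000, False))    # adequate response
```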
Main results. GEMINI-1 and GEMINI-2 randomized a combined total of 1441 participants to receive either the once-daily 2-drug ARV regimen (dolutegravir and lamivudine, n = 719) or the once-daily 3-drug ARV regimen (dolutegravir, TDF, and emtricitabine, n = 722). Of the 533 participants who did not meet inclusion criteria, the predominant reasons for exclusion were either having preexisting major viral resistance mutations (n = 246) or viral loads outside the range of 1000 to 500,000 copies/mL (n = 133).
Baseline demographic and clinical characteristics were similar between both groups. The median age was 33 years (10% were over 50 years of age), and participants were mostly male (85%) and white (68%). Baseline HIV RNA counts of > 100,000 copies/mL were found in 293 participants (20%), and 188 (8%) participants had CD4 counts of ≤ 200 cells/mL.
Noninferiority of the once-daily 2-drug versus the once-daily 3-drug ARV regimen was demonstrated in both the GEMINI-1 and GEMINI-2 trials for the intent-to-treat-exposed population. In GEMINI-1, 90% (n = 320) in the 2-drug ARV group achieved virologic suppression at week 48 compared to 93% (n = 332) in the 3-drug ARV group (no statistically significant difference). In GEMINI-2, 93% (n = 335) in the 2-drug ARV group achieved virologic suppression at week 48 compared to 94% (n = 337) in the 3-drug ARV group (no statistically significant difference).
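As a back-of-envelope illustration of how such a noninferiority comparison is assessed, the between-arm difference in suppression rates and a 95% Wald confidence interval can be computed from response counts and compared against a prespecified margin. Note that both the -10% margin and the denominators below are illustrative assumptions, as neither is reported in this summary.

```python
import math

def wald_ci_diff(x1, n1, x2, n2, z=1.96):
    """Difference in proportions (test arm minus comparator) with 95% Wald CI."""
    p1, p2 = x1 / n1, x2 / n2
    se = math.sqrt(p1 * (1 - p1) / n1 + p2 * (1 - p2) / n2)
    d = p1 - p2
    return d, d - z * se, d + z * se

def noninferior(x1, n1, x2, n2, margin=-0.10):
    """Noninferiority is shown if the CI lower bound exceeds the margin."""
    _, lo, _ = wald_ci_diff(x1, n1, x2, n2)
    return lo > margin

# Responder counts from the text; denominators are assumed for illustration.
d, lo, hi = wald_ci_diff(320, 356, 332, 358)
print(f"diff={d:.3f}, 95% CI ({lo:.3f}, {hi:.3f}), "
      f"noninferior={noninferior(320, 356, 332, 358)}")
```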
A subgroup analysis found no significant impact of baseline HIV RNA (> 100,000 compared to ≤ 100,000 copies/mL) on achieving virologic suppression at week 48. However, a subgroup analysis did find that participants with CD4 counts < 200 cells/mL had a reduced response with the once-daily 2-drug versus the 3-drug ARV regimen for achieving virologic response at week 48 (79% versus 93%, respectively).
Overall, 10 participants met virological withdrawal criteria during the study period, and 4 of these were on the 2-drug ARV regimen. For these 10 participants, genotypic testing did not find emergence of resistance to either nucleoside reverse transcriptase or integrase strand transfer inhibitors.
Regarding renal biomarkers, increases in both serum creatinine and urinary protein/creatinine excretion were significantly greater in the 3-drug ARV group. Also, biomarkers indicating increased bone turnover were elevated in both groups, but the degree of elevation was significantly lower in the 2-drug ARV cohort. It is unclear whether these findings reflect an increased or decreased risk of developing osteopenia or osteoporosis in the 2 study groups.
Conclusion. The once-daily 2-drug ARV regimen dolutegravir and lamivudine is noninferior to the guideline-recommended once-daily 3-drug ARV regimen dolutegravir, TDF, and emtricitabine at achieving viral suppression in ART-naive HIV-1 infected individuals with HIV RNA counts < 500,000 copies/mL. However, the efficacy of this ARV regimen may be compromised in individuals with CD4 counts < 200 cells/mL.
Commentary
Currently, the mainstay of HIV pharmacotherapy is a 3-drug regimen consisting of 2 nucleoside reverse transcriptase inhibitors in combination with 1 drug from another class, with an integrase strand transfer inhibitor being the preferred third drug.6 Despite the improved tolerability of contemporary ARVs, there remains concern among HIV practitioners regarding potential toxicities associated with cumulative drug exposure, specifically related to nucleoside reverse transcriptase inhibitors. As a result, there has been much interest in evaluating 2-drug ARV regimens for HIV treatment in order to reduce overall drug exposure.7-10
The 48-week results of the GEMINI-1 and GEMINI-2 trials, published in early 2019, further expand our understanding regarding the efficacy and safety of 2-drug regimens in HIV treatment. These identically designed studies evaluated once-daily dolutegravir and lamivudine for HIV in a treatment-naive population. This goes a step further than the SWORD-1 and SWORD-2 trials, which evaluated once-daily dolutegravir and rilpivirine as a step-down therapy for virologically suppressed individuals and led to the U.S. Food and Drug Administration (FDA) approval of the single-tablet combination regimen dolutegravir/rilpivirine (Juluca).10 Therefore, whereas the SWORD trials evaluated a 2-drug regimen for maintenance of virologic suppression, the GEMINI trials assessed whether a 2-drug regimen can both achieve and maintain virologic suppression.
The results of the GEMINI trials are promising for a future direction in HIV care. The rates of virologic suppression achieved in these trials are comparable to those seen in the SWORD trials.10 Furthermore, the virologic response seen in the GEMINI trials is comparable to that seen in similar trials that evaluated a 3-drug ARV regimen consisting of an integrase strand transfer inhibitor–based backbone in ART-naive individuals.11,12
A major confounder in the design of this trial was the inclusion of TDF as one of the components of the comparator arm, an agent already demonstrated to have detrimental effects on both renal and bone health.13,14 Additionally, the bone biomarker results were inconclusive; the agents’ effects on bone would have been better demonstrated through bone mineral density testing, as was done in prior trials.
Applications for Clinical Practice
Given the recent FDA approval of the single-tablet combination regimen dolutegravir and lamivudine (Dovato), this once-daily 2-drug ARV regimen will begin making its way into clinical practice for certain patients. Prior to starting this regimen, hepatitis B infection must first be ruled out because of the poor efficacy of lamivudine monotherapy for management of chronic hepatitis B infection.15 Additionally, baseline genotype testing should be performed before starting this regimen, given that approximately 10% of newly diagnosed HIV patients have baseline resistance mutations.16 Obtaining rapid genotype testing may be difficult in low-resource settings where such testing is not readily available. Finally, this approach may not be applicable to those presenting with acute HIV infection, in whom viral loads are often in the millions of copies per mL. It is likely that dolutegravir/lamivudine could assume a role similar to that of dolutegravir/rilpivirine, in which patients who present with acute HIV step down to a 2-drug regimen once their viral loads have either dropped below 500,000 copies/mL or have already been suppressed.
—Evan K. Mallory, PharmD, Banner-University Medical Center Tucson, and Norman L. Beatty, MD, University of Arizona College of Medicine, Tucson, AZ
1. Cahn P, Rolón MJ, Figueroa MI, et al. Dolutegravir-lamivudine as initial therapy in HIV-1 infected, ARV-naive patients, 48-week results of the PADDLE (Pilot Antiretroviral Design with Dolutegravir LamivudinE) study. J Int AIDS Soc. 2017;20:21678.
2. Taiwo BO, Zheng L, Stefanescu A, et al. ACTG A5353: a pilot study of dolutegravir plus lamivudine for initial treatment of human immunodeficiency virus-1 (HIV-1)-infected participants with HIV-1 RNA < 500,000 copies/mL. Clin Infect Dis. 2018;66:1689-1697.
3. Min S, Sloan L, DeJesus E, et al. Antiviral activity, safety, and pharmacokinetics/pharmacodynamics of dolutegravir as 10-day monotherapy in HIV-1-infected adults. AIDS. 2011;25:1737-1745.
4. Eron JJ, Benoit SL, Jemsek J, et al. Treatment with lamivudine, zidovudine, or both in HIV-positive patients with 200 to 500 CD4+ cells per cubic millimeter. North American HIV Working Party. N Engl J Med. 1995;333:1662-1669.
5. Kuritzkes DR, Quinn JB, Benoit SL, et al. Drug resistance and virologic response in NUCA 3001, a randomized trial of lamivudine (3TC) versus zidovudine (ZDV) versus ZDV plus 3TC in previously untreated patients. AIDS. 1996;10:975-981.
6. Department of Health and Human Services. Panel on Antiretroviral Guidelines for Adults and Adolescents. Guidelines for the use of antiretroviral agents in adults and adolescents living with HIV. http://aidsinfo.nih.gov/contentfiles/lvguidelines/AdultandAdolescentGL.pdf. Accessed April 1, 2019.
7. Riddler SA, Haubrich R, DiRienzo AG, et al. Class-sparing regimens for initial treatment of HIV-1 infection. N Engl J Med. 2008;358:2095-2106.
8. Reynes J, Lawal A, Pulido F, et al. Examination of noninferiority, safety, and tolerability of lopinavir/ritonavir and raltegravir compared with lopinavir/ritonavir and tenofovir/ emtricitabine in antiretroviral-naïve subjects: the progress study, 48-week results. HIV Clin Trials. 2011;12:255-267.
9. Cahn P, Andrade-Villanueva J, Arribas JR, et al. Dual therapy with lopinavir and ritonavir plus lamivudine versus triple therapy with lopinavir and ritonavir plus two nucleoside reverse transcriptase inhibitors in antiretroviral-therapy-naive adults with HIV-1 infection: 48 week results of the randomised, open label, non-inferiority GARDEL trial. Lancet Infect Dis. 2014;14:572-580.
10. Llibre JM, Hung CC, Brinson C, et al. Efficacy, safety, and tolerability of dolutegravir-rilpivirine for the maintenance of virological suppression in adults with HIV-1: phase 3, randomised, non-inferiority SWORD-1 and SWORD-2 studies. Lancet. 2018;391:839-849.
11. Walmsley SL, Antela A, Clumeck N, et al. Dolutegravir plus abacavir-lamivudine for the treatment of HIV-1 infection. N Engl J Med. 2013;369:1807-1818.
12. Sax PE, Wohl D, Yin MT, et al. Tenofovir alafenamide versus tenofovir disoproxil fumarate, coformulated with elvitegravir, cobicistat, and emtricitabine, for initial treatment of HIV-1 infection: two randomised, double-blind, phase 3, non-inferiority trials. Lancet. 2015;385:2606-2615.
13. Mulligan K, Glidden DV, Anderson PL, et al. Effects of emtricitabine/tenofovir on bone mineral density in HIV-negative persons in a randomized, double-blind, placebo-controlled trial. Clin Infect Dis. 2015;61:572-580.
14. Cooper RD, Wiebe N, Smith N, et al. Systematic review and meta-analysis: renal safety of tenofovir disoproxil fumarate in HIV-infected patients. Clin Infect Dis. 2010;51:496-505.
15. Kim D, Wheeler W, Ziebell R, et al. Prevalence of antiretroviral drug resistance among newly diagnosed HIV-1 infected persons, United States, 2007. 17th Conference on Retroviruses & Opportunistic Infections; San Francisco, CA: 2010. Feb 16-19. Abstract 580.
16. Terrault NA, Lok ASF, McMahon BJ, et al. Update on prevention, diagnosis, and treatment of chronic hepatitis B: AASLD 2018 hepatitis B guidance. Hepatology. 2018;67:1560-1599.
Study Overview
Objective. To evaluate the efficacy and safety of a once-daily 2-drug antiretroviral (ARV) regimen, dolutegravir plus lamivudine, for the treatment of HIV-1 infection in adults naive to antiretroviral therapy (ART).
Design. GEMINI-1 and GEMINI-2 were 2 identically designed multicenter, double-blind, randomized, noninferiority, phase 3 clinical trials conducted between July 18, 2016, and March 31, 2017. Participants were randomized to 1 of 2 once-daily HIV regimens: the study regimen, consisting of once-daily dolutegravir 50 mg plus lamivudine 300 mg, or the standard-of-care regimen, consisting of once-daily dolutegravir 50 mg plus tenofovir disoproxil fumarate (TDF) 300 mg plus emtricitabine 200 mg. While this article presents results at week 48, both trials will follow participants through week 148 to evaluate long-term efficacy and safety.
Setting and participants. Eligible participants were aged 18 years or older with treatment-naive HIV-1 infection. Women were eligible if they were not pregnant, not lactating, and not of reproductive potential, the last defined by criteria including tubal ligation, hysterectomy, postmenopausal status, or use of highly effective contraception. Initially, eligibility was restricted to those with viral loads between 1000 and 100,000 copies/mL. However, the upper limit was later increased to 500,000 copies/mL based on an independent review of results from other clinical trials1,2 evaluating dual therapy with dolutegravir and lamivudine, which indicated efficacy in patients with viral loads up to 500,000 copies/mL.3-5
Notable exclusion criteria included: (1) major mutations to nucleoside reverse transcriptase inhibitors, non-nucleoside reverse transcriptase inhibitors, and protease inhibitors; (2) evidence of hepatitis B infection; (3) hepatitis C infection with anticipation of initiating treatment within 48 weeks of study enrollment; and (4) stage 3 HIV disease, per Centers for Disease Control and Prevention criteria, with the exception of cutaneous Kaposi sarcoma and CD4 cell counts < 200 cells/mL.
Main outcome measures. The primary endpoint was noninferiority of the 2-drug ARV regimen, assessed as the proportion of participants who achieved virologic suppression at week 48 in the intent-to-treat-exposed population. For the purposes of this study, virologic suppression was defined as having fewer than 50 copies of HIV-1 RNA per mL at week 48. For evaluation of safety and toxicity concerns, renal and bone biomarkers were assessed at study entry and at weeks 24 and 48. In addition, participants who met virological withdrawal criteria were evaluated for integrase strand transfer inhibitor mutations. Virological withdrawal was defined as the presence of any 1 of the following: (1) HIV RNA > 200 copies/mL at week 24, (2) HIV RNA > 200 copies/mL after previous HIV RNA < 200 copies/mL (confirmed rebound), or (3) a < 1 log10 copies/mL decrease from baseline (unless already < 200 copies/mL).
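The virological-withdrawal definition above is essentially a three-branch decision rule. A minimal sketch in Python (illustrative only; the function name and inputs are hypothetical, not the trial's actual analysis code):

```python
import math

def meets_virological_withdrawal(week, hiv_rna, baseline_rna, prior_below_200):
    """Illustrative check of the 3 withdrawal criteria described above.
    hiv_rna and baseline_rna are in copies/mL; prior_below_200 indicates a
    previous measurement < 200 copies/mL (hypothetical argument shape)."""
    # (1) HIV RNA > 200 copies/mL at week 24
    if week == 24 and hiv_rna > 200:
        return True
    # (2) confirmed rebound: > 200 copies/mL after a prior value < 200
    if prior_below_200 and hiv_rna > 200:
        return True
    # (3) < 1 log10 decrease from baseline, unless already < 200 copies/mL
    if hiv_rna >= 200 and math.log10(baseline_rna / hiv_rna) < 1:
        return True
    return False
```

For example, a week-24 value of 500 copies/mL meets criterion 1, while a week-48 value of 40 copies/mL meets none of the criteria.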
Main results. GEMINI-1 and GEMINI-2 randomized a combined total of 1441 participants to receive either the once-daily 2-drug ARV regimen (dolutegravir and lamivudine, n = 719) or the once-daily 3-drug ARV regimen (dolutegravir, TDF, and emtricitabine, n = 722). Of the 533 participants who did not meet inclusion criteria, the predominant reasons for exclusion were either having preexisting major viral resistance mutations (n = 246) or viral loads outside the range of 1000 to 500,000 copies/mL (n = 133).
Baseline demographic and clinical characteristics were similar between both groups. The median age was 33 years (10% were over 50 years of age), and participants were mostly male (85%) and white (68%). Baseline HIV RNA counts of > 100,000 copies/mL were found in 293 participants (20%), and 188 (8%) participants had CD4 counts of ≤ 200 cells/mL.
Noninferiority of the once-daily 2-drug versus the once-daily 3-drug ARV regimen was demonstrated in both the GEMINI-1 and GEMINI-2 trials for the intent-to-treat-exposed population. In GEMINI-1, 90% (n = 320) in the 2-drug ARV group achieved virologic suppression at week 48 compared to 93% (n = 332) in the 3-drug ARV group (no statistically significant difference). In GEMINI-2, 93% (n = 335) in the 2-drug ARV group achieved virologic suppression at week 48 compared to 94% (n = 337) in the 3-drug ARV group (no statistically significant difference).
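Noninferiority of this kind is typically judged by whether the lower bound of the confidence interval for the rate difference stays above a prespecified margin. A hedged sketch (the −10% margin and the denominators used in the example are assumptions for illustration; the article does not report them):

```python
import math

def noninferiority_check(x_test, n_test, x_ref, n_ref, margin=-0.10):
    """Wald 95% CI for the suppression-rate difference (test - reference).
    Noninferiority is concluded if the CI's lower bound lies above `margin`.
    The -10% default margin is an assumption, not taken from the article."""
    p1, p2 = x_test / n_test, x_ref / n_ref
    diff = p1 - p2
    se = math.sqrt(p1 * (1 - p1) / n_test + p2 * (1 - p2) / n_ref)
    ci = (diff - 1.96 * se, diff + 1.96 * se)
    return diff, ci, ci[0] > margin

# Hypothetical denominators consistent with the reported 90% vs 93%:
diff, ci, noninferior = noninferiority_check(320, 356, 332, 357)
```

With these illustrative inputs the rate difference is about −3% and the lower confidence bound stays above −10%, so noninferiority would be concluded.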
A subgroup analysis found no significant impact of baseline HIV RNA (> 100,000 versus ≤ 100,000 copies/mL) on achieving virologic suppression at week 48. However, participants with CD4 counts < 200 cells/mL had a lower rate of virologic response at week 48 with the once-daily 2-drug regimen than with the 3-drug regimen (79% versus 93%, respectively).
Overall, 10 participants met virological withdrawal criteria during the study period, and 4 of these were on the 2-drug ARV regimen. For these 10 participants, genotypic testing did not find emergence of resistance to either nucleoside reverse transcriptase or integrase strand transfer inhibitors.
Regarding renal biomarkers, increases in both serum creatinine and the urine protein-to-creatinine ratio were significantly greater in the 3-drug ARV group. Biomarkers indicating increased bone turnover were elevated in both groups, but the degree of elevation was significantly lower in the 2-drug ARV regimen cohort. It is unclear whether these findings reflect an increased or decreased risk of developing osteopenia or osteoporosis in the 2 study groups.
Conclusion. The once-daily 2-drug ARV regimen dolutegravir and lamivudine is noninferior to the guideline-recommended once-daily 3-drug ARV regimen dolutegravir, TDF, and emtricitabine at achieving viral suppression in ART-naive HIV-1 infected individuals with HIV RNA counts < 500,000 copies/mL. However, the efficacy of this ARV regimen may be compromised in individuals with CD4 counts < 200 cells/mL.
Commentary
Currently, the mainstay of HIV pharmacotherapy is a 3-drug regimen consisting of 2 nucleoside reverse transcriptase inhibitors in combination with 1 drug from another class, with an integrase strand transfer inhibitor being the preferred third drug.6 Despite the improved tolerability of contemporary ARVs, there remains concern among HIV practitioners regarding potential toxicities associated with cumulative drug exposure, specifically related to nucleoside reverse transcriptase inhibitors. As a result, there has been much interest in evaluating 2-drug ARV regimens for HIV treatment in order to reduce overall drug exposure.7-10
The 48-week results of the GEMINI-1 and GEMINI-2 trials, published in early 2019, further expand our understanding regarding the efficacy and safety of 2-drug regimens in HIV treatment. These identically designed studies evaluated once-daily dolutegravir and lamivudine for HIV in a treatment-naive population. This goes a step further than the SWORD-1 and SWORD-2 trials, which evaluated once-daily dolutegravir and rilpivirine as a step-down therapy for virologically suppressed individuals and led to the U.S. Food and Drug Administration (FDA) approval of the single-tablet combination regimen dolutegravir/rilpivirine (Juluca).10 Therefore, whereas the SWORD trials evaluated a 2-drug regimen for maintenance of virologic suppression, the GEMINI trials assessed whether a 2-drug regimen can both achieve and maintain virologic suppression.
The results of the GEMINI trials are promising for a future direction in HIV care. The rates of virologic suppression achieved in these trials are comparable to those seen in the SWORD trials.10 Furthermore, the virologic response seen in the GEMINI trials is comparable to that seen in similar trials that evaluated a 3-drug ARV regimen consisting of an integrase strand transfer inhibitor–based backbone in ART-naive individuals.11,12
A major confounder to the design of this trial was that it included TDF as one of the components in the comparator arm, an agent that has already been demonstrated to have detrimental effects on both renal and bone health.13,14 Additionally, the bone biomarker results were inconclusive, and the agents’ effects on bone would have been better demonstrated through bone mineral density testing, as had been done in prior trials.
Applications for Clinical Practice
Given the recent FDA approval of the single-tablet combination regimen dolutegravir and lamivudine (Dovato), this once-daily 2-drug ARV regimen will begin making its way into clinical practice for certain patients. Prior to starting this regimen, hepatitis B infection must be ruled out because of the poor efficacy of lamivudine monotherapy for management of chronic hepatitis B infection.16 Additionally, baseline genotype testing should be performed before starting this regimen, given that approximately 10% of newly diagnosed HIV patients have baseline resistance mutations.15 Obtaining rapid genotype testing may be difficult in low-resource settings where such testing is not readily available. Finally, this approach may not be applicable to those presenting with acute HIV infection, in whom viral loads are often in the millions of copies per mL. Dolutegravir/lamivudine could assume a role similar to that of dolutegravir/rilpivirine, in which patients who present with acute HIV step down to a 2-drug regimen once their viral loads have either dropped below 500,000 copies/mL or have been suppressed.
—Evan K. Mallory, PharmD, Banner-University Medical Center Tucson, and Norman L. Beatty, MD, University of Arizona College of Medicine, Tucson, AZ
1. Cahn P, Rolón MJ, Figueroa MI, et al. Dolutegravir-lamivudine as initial therapy in HIV-1 infected, ARV-naive patients, 48-week results of the PADDLE (Pilot Antiretroviral Design with Dolutegravir LamivudinE) study. J Int AIDS Soc. 2017;20:21678.
2. Taiwo BO, Zheng L, Stefanescu A, et al. ACTG A5353: a pilot study of dolutegravir plus lamivudine for initial treatment of human immunodeficiency virus-1 (HIV-1)-infected participants with HIV-1 RNA <500,000 copies/mL. Clin Infect Dis. 2018;66:1689-1697.
3. Min S, Sloan L, DeJesus E, et al. Antiviral activity, safety, and pharmacokinetics/pharmacodynamics of dolutegravir as 10-day monotherapy in HIV-1-infected adults. AIDS. 2011;25:1737-1745.
4. Eron JJ, Benoit SL, Jemsek J, et al. Treatment with lamivudine, zidovudine, or both in HIV-positive patients with 200 to 500 CD4+ cells per cubic millimeter. North American HIV Working Party. N Engl J Med. 1995;333:1662-1669.
5. Kuritzkes DR, Quinn JB, Benoit SL, et al. Drug resistance and virologic response in NUCA 3001, a randomized trial of lamivudine (3TC) versus zidovudine (ZDV) versus ZDV plus 3TC in previously untreated patients. AIDS. 1996;10:975-981.
6. Department of Health and Human Services. Panel on Antiretroviral Guidelines for Adults and Adolescents. Guidelines for the use of antiretroviral agents in adults and adolescents living with HIV. http://aidsinfo.nih.gov/contentfiles/lvguidelines/AdultandAdolescentGL.pdf. Accessed April 1, 2019.
7. Riddler SA, Haubrich R, DiRienzo AG, et al. Class-sparing regimens for initial treatment of HIV-1 infection. N Engl J Med. 2008;358:2095-2106.
8. Reynes J, Lawal A, Pulido F, et al. Examination of noninferiority, safety, and tolerability of lopinavir/ritonavir and raltegravir compared with lopinavir/ritonavir and tenofovir/emtricitabine in antiretroviral-naïve subjects: the PROGRESS study, 48-week results. HIV Clin Trials. 2011;12:255-267.
9. Cahn P, Andrade-Villanueva J, Arribas JR, et al. Dual therapy with lopinavir and ritonavir plus lamivudine versus triple therapy with lopinavir and ritonavir plus two nucleoside reverse transcriptase inhibitors in antiretroviral-therapy-naive adults with HIV-1 infection: 48 week results of the randomised, open label, non-inferiority GARDEL trial. Lancet Infect Dis. 2014;14:572-580.
10. Llibre JM, Hung CC, Brinson C, et al. Efficacy, safety, and tolerability of dolutegravir-rilpivirine for the maintenance of virological suppression in adults with HIV-1: phase 3, randomised, non-inferiority SWORD-1 and SWORD-2 studies. Lancet. 2018;391:839-849.
11. Walmsley SL, Antela A, Clumeck N, et al. Dolutegravir plus abacavir-lamivudine for the treatment of HIV-1 infection. N Engl J Med. 2013;369:1807-1818.
12. Sax PE, Wohl D, Yin MT, et al. Tenofovir alafenamide versus tenofovir disoproxil fumarate, coformulated with elvitegravir, cobicistat, and emtricitabine, for initial treatment of HIV-1 infection: two randomised, double-blind, phase 3, non-inferiority trials. Lancet. 2015;385:2606-2615.
13. Mulligan K, Glidden DV, Anderson PL, et al. Effects of emtricitabine/tenofovir on bone mineral density in HIV-negative persons in a randomized, double-blind, placebo-controlled trial. Clin Infect Dis. 2015;61:572-580.
14. Cooper RD, Wiebe N, Smith N, et al. Systematic review and meta-analysis: renal safety of tenofovir disoproxil fumarate in HIV-infected patients. Clin Infect Dis. 2010;51:496-505.
15. Kim D, Wheeler W, Ziebell R, et al. Prevalence of antiretroviral drug resistance among newly diagnosed HIV-1 infected persons, United States, 2007. 17th Conference on Retroviruses & Opportunistic Infections; San Francisco, CA: 2010. Feb 16-19. Abstract 580.
16. Terrault NA, Lok ASF, McMahon BJ, et al. Update on prevention, diagnosis, and treatment of chronic hepatitis B: AASLD 2018 hepatitis B guidance. Hepatology. 2018;67:1560-1599.
Meta-analysis finds no link between PPI use and risk of dementia
The finding runs counter to recent studies, including a large pharmacoepidemiological claims data analysis from Germany, that propose an association between proton pump inhibitor (PPI) use and the development of dementia (JAMA Neurol. 2016;73[4]:410-6). “The issue with these studies is that they’re based on retrospective claims data and pharmacoepidemiological studies and insurance databases that don’t really give you a good causality basis,” lead study author Saad Alrajhi, MD, said in an interview at the annual Digestive Disease Week.
In an effort to better characterize the association between PPI exposure and dementia, Dr. Alrajhi, a gastroenterology fellow at McGill University, Montreal, and colleagues conducted a meta-analysis of all fully published randomized clinical trials or observational studies comparing use of PPIs and occurrence of dementia. The researchers queried Embase, MEDLINE, and ISI Web of Knowledge for relevant studies that were published from 1995 through September 2018. Next, they assessed the quality of the studies by using the Cochrane risk assessment tool for RCTs or the Newcastle-Ottawa Scale for observational studies.
As the primary outcome, the researchers compared dementia incidence after PPI exposure (experimental group) versus no PPI exposure (control group). Development of Alzheimer’s dementia was a secondary outcome. Sensitivity analyses consisted of excluding one study at a time and assessing results among the highest-quality studies. Subgroup analyses included stratifying patients by age. To report odds ratios, Dr. Alrajhi and colleagues used fixed or random effects models based on the absence or presence of heterogeneity.
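A fixed-effect pooled odds ratio of the kind described is commonly computed by inverse-variance weighting of the study-level log odds ratios. A minimal sketch (illustrative only; the function and the input numbers in the example are hypothetical, not the authors' actual data or code):

```python
import math

def fixed_effect_pooled_or(odds_ratios, ci_lowers, ci_uppers):
    """Inverse-variance fixed-effect pooling of study odds ratios.
    Standard errors are recovered from each study's 95% CI on the log scale."""
    log_ors = [math.log(o) for o in odds_ratios]
    ses = [(math.log(u) - math.log(l)) / (2 * 1.96)
           for l, u in zip(ci_lowers, ci_uppers)]
    weights = [1 / se ** 2 for se in ses]          # precision weights
    pooled = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    se_pooled = math.sqrt(1 / sum(weights))
    return (math.exp(pooled),
            math.exp(pooled - 1.96 * se_pooled),
            math.exp(pooled + 1.96 * se_pooled))

# Hypothetical two-study example: pooled OR lands between the inputs
pooled_or, lo, hi = fixed_effect_pooled_or([1.1, 1.0], [0.9, 0.8], [1.3, 1.25])
```

Under substantial heterogeneity, a random-effects model (e.g., DerSimonian-Laird) would instead widen the weights to include between-study variance, which is why the authors' choice of model depended on the heterogeneity observed.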
Of 549 studies assessed, 5 met the criteria for inclusion in the final analysis: 3 case-control studies and 2 cohort studies, with a total of 472,933 patients. All of the studies scored 8 or 9 on the Newcastle-Ottawa scale, indicating high quality. Significant heterogeneity was noted for all analyses. The researchers found that the incidence of dementia was not significantly increased among patients in the PPI-exposed group (odds ratio [OR], 1.08; 95% confidence interval [CI], 0.97-1.20; P = .18). Sensitivity analyses confirmed the robustness of the results. Subgroup analysis showed no between-group differences among studies with a minimum age above 65 years (three studies) or below 65 years (two studies). PPI exposure was not associated with the development of Alzheimer’s dementia (two studies; OR, 1.32; 95% CI, 0.80-2.17; P = .27).
“In the absence of randomized trial evidence, a PPI prescribing approach based on appropriate utilization of guideline-based prescription should be done without the extra fear of the association of dementia,” Dr. Alrajhi said.
The researchers reported having no financial disclosures.
The finding runs counter to recent studies, including a large pharmacoepidemiological claims data analysis from Germany, that propose an association between proton pump inhibitor (PPI) use and the development of dementia (JAMA Neurol. 2016;73[4]:410-6). “The issue with these studies is that they’re based on retrospective claims data and pharmacoepidemiological studies and insurance databases that don’t really give you a good causality basis,” lead study author Saad Alrajhi, MD, said in an interview at the annual Digestive Disease Week.
In an effort to better characterize the association between PPI exposure and dementia, Dr. Alrajhi, a gastroenterology fellow at McGill University, Montreal, and colleagues conducted a meta-analysis of all fully published randomized clinical trials or observational studies comparing use of PPIs and occurrence of dementia. The researchers queried Embase, MEDLINE, and ISI Web of Knowledge for relevant studies that were published from 1995 through September 2018. Next, they assessed the quality of the studies by using the Cochrane risk assessment tool for RCTs or the Newcastle-Ottawa Scale for observational studies.
As the primary outcome, the researchers compared dementia incidence after PPI exposure (experimental group) versus no PPI exposure (control group). Development of Alzheimer’s dementia was a secondary outcome. Sensitivity analyses consisted of excluding one study at a time and assessing results among the studies of highest quality. Subgroup analyses included stratifying patients by age. To report odds ratios, Dr. Alrajhi and colleagues used fixed or random effects models based on the absence or presence of heterogeneity.
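The fixed-effect pooling approach described above combines study-level odds ratios on the log scale, weighting each study by the inverse of its variance. A minimal sketch follows; the odds ratios and variances passed in are made up for illustration and do not come from the five included studies.

```python
import math

def pooled_or(odds_ratios, variances):
    """Fixed-effect (inverse-variance) pooled odds ratio.

    Each study's log OR is weighted by 1/variance; the pooled
    estimate is the weighted mean, exponentiated back to the OR scale.
    """
    weights = [1.0 / v for v in variances]
    log_ors = [math.log(o) for o in odds_ratios]
    pooled_log = sum(w * lo for w, lo in zip(weights, log_ors)) / sum(weights)
    return math.exp(pooled_log)

# Hypothetical studies: ORs near 1 with differing precision
result = pooled_or([1.1, 0.95, 1.2], [0.01, 0.02, 0.015])
print(round(result, 2))
```

When heterogeneity is present, a random-effects model adds a between-study variance component to each weight, which the sketch above omits.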
Of 549 studies assessed, 5 met the criteria for inclusion in the final analysis: 3 case-control studies and 2 cohort studies, with a total of 472,933 patients. All of the studies scored 8 or 9 on the Newcastle-Ottawa Scale, indicating high quality. Significant heterogeneity was noted for all analyses. The researchers found that the incidence of dementia was not significantly increased among patients in the PPI-exposed group (odds ratio, 1.08; 95% confidence interval, 0.97-1.20; P = .18). Sensitivity analyses confirmed the robustness of the results. Subgroup analysis showed no between-group differences among studies with a minimum enrollment age above 65 years (3 studies) or below 65 years (2 studies). PPI exposure was not associated with the development of Alzheimer’s dementia (2 studies; OR, 1.32; 95% CI, 0.80-2.17; P = .27).
“In the absence of randomized trial evidence, a PPI prescribing approach based on appropriate utilization of guideline-based prescription should be done without the extra fear of the association of dementia,” Dr. Alrajhi said.
The researchers reported having no financial disclosures.
REPORTING FROM DDW 2019
Women’s health 2019: Osteoporosis, breast cancer, contraception, and hormone therapy
Keeping up with current evidence-based healthcare practices is key to providing good clinical care to patients. This review presents 5 vignettes that highlight key issues in women’s health: osteoporosis screening, hormonal contraceptive interactions with antibiotics, hormone replacement therapy in carriers of the BRCA1 gene mutation, risks associated with hormonal contraception, and breast cancer diagnosis using digital tomosynthesis in addition to digital mammography. Supporting articles, all published in 2017 and 2018, were selected from high-impact medical and women’s health journals.
OSTEOPOROSIS SCREENING FOR FRACTURE PREVENTION
A 60-year-old woman reports that her last menstrual period was 7 years ago. She has no history of falls or fractures, and she takes no medications. She smokes 10 cigarettes per day and drinks 3 to 4 alcoholic beverages on most days of the week. She is 5 feet 6 inches (168 cm) tall and weighs 107 lb. Should she be screened for osteoporosis?
Osteoporosis is underdiagnosed
It is estimated that, in the United States, 12.3 million individuals older than 50 will develop osteoporosis by 2020. Missed opportunities to screen high-risk individuals can lead to fractures, including fractures of the hip.1
Updated screening recommendations
In 2018, the US Preventive Services Task Force (USPSTF) developed and published evidence-based recommendations for osteoporosis screening to help providers identify and treat osteoporosis early to prevent fractures.2 Available evidence on screening and treatment in women and men was reviewed with the intention of updating the 2011 USPSTF recommendations. The review also evaluated risk assessment tools, screening intervals, and efficacy of screening and treatment in various subpopulations.
Since the 2011 recommendations, more data have become available on fracture risk assessment with or without bone mineral density measurements. In its 2018 report, the USPSTF recommends that postmenopausal women younger than 65 should undergo screening with a bone density test if their 10-year risk of major osteoporotic fracture is more than 8.4%. This is equivalent to the fracture risk of a 65-year-old white woman with no major risk factors for fracture (grade B recommendation—high certainty that the benefit is moderate, or moderate certainty that the benefit is moderate to substantial).2
Assessment of fracture risk
For postmenopausal women who are under age 65 and who have at least 1 risk factor for fracture, it is reasonable to use a clinical risk assessment tool to determine who should undergo screening with bone mineral density measurement. Risk factors associated with an increased risk of osteoporotic fractures include a parental history of hip fracture, smoking, intake of 3 or more alcoholic drinks per day, low body weight, malabsorption, rheumatoid arthritis, diabetes, and postmenopausal status (not using estrogen replacement). Medications should be carefully reviewed for those that can increase the risk of fractures, including steroids and antiestrogen treatments.
The 10-year risk of a major osteoporotic or hip fracture can be assessed using the Fracture Risk Assessment Tool (FRAX), available at www.sheffield.ac.uk/FRAX/. Other acceptable tools that perform similarly to FRAX include the Osteoporosis Risk Assessment Instrument (ORAI) (10 studies; N = 16,780), Osteoporosis Index of Risk (OSIRIS) (5 studies; N = 5,649), Osteoporosis Self-Assessment Tool (OST) (13 studies; N = 44,323), and Simple Calculated Osteoporosis Risk Estimation (SCORE) (8 studies; N = 15,362).
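The USPSTF decision rule described above can be expressed as a short sketch. The risk estimate itself must come from a validated calculator such as FRAX; the function and threshold names below are illustrative, not part of any published tool.

```python
# 10-year major osteoporotic fracture risk threshold (percent) that the
# 2018 USPSTF statement equates to a 65-year-old woman without risk factors
USPSTF_RISK_THRESHOLD = 8.4

def recommend_bmd_screening(age: int, postmenopausal: bool,
                            frax_risk_pct: float) -> bool:
    """Sketch of the USPSTF rule for women: screen with bone mineral
    density testing at age 65+, or earlier if postmenopausal and the
    FRAX-estimated 10-year risk exceeds the threshold."""
    if age >= 65:
        return True
    return postmenopausal and frax_risk_pct > USPSTF_RISK_THRESHOLD

# The vignette patient: age 60, postmenopausal, FRAX risk 9.2%
print(recommend_bmd_screening(60, True, 9.2))  # True -> screening advised
```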
Should this patient be screened for osteoporosis?
Based on the FRAX, this patient’s 10-year risk of major osteoporotic fracture is 9.2%. She would benefit from osteoporosis screening with a bone density test.
DO ANTIBIOTICS REDUCE EFFECTIVENESS OF HORMONAL CONTRACEPTION?
A 27-year-old woman presents with a dog bite on her right hand and is started on oral antibiotics. She takes an oral contraceptive that contains 35 µg of ethinyl estradiol and 0.25 mg of norgestimate. She asks if she should use condoms while taking antibiotics.
The antibiotics rifampin and rifabutin are known inducers of the hepatic enzymes required for contraceptive steroid metabolism, whereas other antibiotics are not. Despite the lack of compelling evidence that broad-spectrum antibiotics interfere with the efficacy of hormonal contraception, most pharmacists recommend backup contraception for women who use concomitant antibiotics.3 This practice could lead to poor compliance with the contraceptive regimen, the antibiotic regimen, or both.3
Simmons et al3 conducted a systematic review of randomized and nonrandomized studies that assessed pregnancy rates, breakthrough bleeding, ovulation suppression, and hormone pharmacokinetics in women taking oral or vaginal hormonal contraceptives in combination with nonrifamycin antibiotics, including oral, intramuscular, and intravenous forms. Oral contraceptives used in the studies included a range of doses and progestins, but lowest-dose pills, such as those containing less than 30 µg ethinyl estradiol or less than 150 µg levonorgestrel, were not included.
The contraceptive formulations in this systematic review3 included oral contraceptive pills, emergency contraception pills, and the contraceptive vaginal ring. The effect of antibiotics on other nonoral contraceptives, such as the transdermal patch, injectables, and progestin implants was not studied.
Four observational studies3 evaluated pregnancy rates or hormonal contraception failure with any antibiotic use. In 2 of these 4 studies, there was no difference in pregnancy rates in women who used oral contraceptives with and without nonrifamycin antibiotics. However, ethinyl estradiol was shown to have increased clearance when administered with dirithromycin (a macrolide).3 Twenty-five of the studies reported measures of contraceptive effectiveness (ovulation) and pharmacokinetic outcomes.
There were no observed differences in ovulation suppression or breakthrough bleeding in any study that combined hormonal contraceptives with an antibiotic. Furthermore, there was no significant decrease in progestin pharmacokinetic parameters during coadministration with an antibiotic.3 Study limitations included small sample sizes and the observational nature of the data.
How would you counsel this patient?
Available evidence suggests that nonrifamycin antibiotics do not diminish the effectiveness of the vaginal contraceptive ring or an oral hormonal contraceptive that contains at least 30 µg of ethinyl estradiol or 150 µg of levonorgestrel. Current guidelines do not recommend the use of additional backup contraception, regardless of hormonal contraception dose or formulation.4 Likewise, the most recent guidance for dental practitioners (ie, from 2012) no longer advises women to use additional contraceptive protection when taking nonrifamycin antibiotics.5
In our practice, we discuss the option of additional protection when prescribing formulations with lower estrogen doses (< 30 µg), not only because of the limitations of the available data, but also because of the high rates of unintended pregnancy with typical use of combined hormonal contraceptives (9% per year, unrelated to use of antibiotics).4 However, if our patient would rather not use additional barrier methods, she can be reassured that concomitant nonrifamycin antibiotic use is unlikely to affect contraceptive effectiveness.
HORMONE REPLACEMENT THERAPY IN CARRIERS OF THE BRCA1 MUTATION
A 41-year-old healthy mother of 3 was recently found to be a carrier of the BRCA1 mutation. She is planning to undergo prophylactic bilateral salpingo-oophorectomy for ovarian cancer prevention. However, she is apprehensive about undergoing surgical menopause. Should she be started on hormone replacement therapy after oophorectomy? How would hormone replacement therapy affect her risk of breast cancer?
In females who carry the BRCA1 mutation, the cumulative risks of ovarian and breast cancer by age 80 approach 44% (95% confidence interval [CI] 36%–53%) and 72% (95% CI 65%–79%), respectively.6 Prophylactic salpingo-oophorectomy reduces the risk of breast cancer by 50% and the risk of ovarian cancer by 90%. Unfortunately, premature withdrawal of ovarian hormones has been associated with long-term adverse effects including significant vasomotor symptoms, decreased quality of life, sexual dysfunction, early mortality, bone loss, decline in mood and cognition, and poor cardiovascular outcomes.7 Many of these effects can be avoided or lessened with hormone replacement therapy.
Kotsopoulos et al8 conducted a longitudinal, prospective analysis of BRCA1 mutation carriers in a multicenter study between 1995 and 2017. The mean follow-up period was 7.6 years (range 0.4–22.1). The study assessed associations between the use of hormone replacement therapy and breast cancer risk in carriers of the BRCA1 mutation who underwent prophylactic salpingo-oophorectomy. Study participants did not have a personal history of cancer. Those with a history of prophylactic mastectomy were excluded.
Participants completed a series of questionnaires every 2 years, disclosing updates in personal medical, cancer, and reproductive history. The questionnaires also inquired about the use of hormone replacement therapy, including the type used (estrogen only, progestin only, estrogen plus progestin, other), brand name, duration of use, and dose and route of administration (pill, patch, suppository).
Of the 13,087 BRCA1 mutation carriers identified, 872 met the study criteria. Of those, 377 (43%) reported using some form of hormone replacement therapy after salpingo-oophorectomy, and 495 (57%) did not. The average duration of use was 3.9 years (range 0.5–19), with most (69%) using estrogen alone; 18% used other regimens, including estrogen plus progestin and progestin only. A small percentage of participants did not indicate which formulation they used. On average, women using hormone replacement therapy underwent prophylactic oophorectomy earlier than nonusers (age 43.0 vs 48.4; absolute difference 5.5 years, P < .001).
During follow-up, there was no significant difference noted in the proportion of women diagnosed with breast cancer between hormone replacement therapy users and nonusers (10.3% vs 10.7%; absolute difference 0.4%; P = .86). In fact, for each year of estrogen-containing hormone replacement therapy, there was an 18% reduction in breast cancer risk when oophorectomy was performed before age 45 (95% CI 0.69–0.97). The authors also noted a nonsignificant 14% trend toward an increase in breast cancer risk for each year of progestin use after oophorectomy when surgery was performed before age 45 (95% CI 0.9–1.46).
Although prophylactic hysterectomy was not recommended, the authors noted that hysterectomy would eliminate the need for progestin-containing hormone replacement therapy. For those who underwent oophorectomy after age 45, hormone replacement therapy did not increase or decrease the risk of breast cancer.7
A meta-analysis by Marchetti et al9 also supports the safety of hormone replacement therapy after risk-reducing salpingo-oophorectomy. Three studies that included 1,100 patients were analyzed (including the Kotsopoulos study8 noted above). There was a nonsignificant decrease in breast cancer risk in women on estrogen-only hormone replacement therapy compared with women on estrogen-plus-progestin therapy (odds ratio 0.53, 95% CI 0.25–1.15). Overall, the authors regarded hormone replacement therapy as a safe therapeutic option after prophylactic salpingo-oophorectomy in carriers of the BRCA1 and BRCA2 mutations.9
In a case-control study published in 2016,10 hormone replacement therapy was assessed in 432 postmenopausal BRCA1 mutation carriers with invasive breast cancer (cases) and in 432 BRCA1 mutation carriers without a history of breast cancer (controls). Results showed no difference in breast cancer risk between hormone replacement therapy users and nonusers.10
Rebbeck et al11 evaluated short-term hormone replacement therapy in BRCA1 and BRCA2 gene-mutation carriers after they underwent prophylactic salpingo-oophorectomy. The results showed that hormone replacement did not affect the breast cancer risk-reduction conferred with prophylactic bilateral salpingo-oophorectomy.
Johansen et al12 evaluated hormone replacement therapy in premenopausal women after prophylactic salpingo-oophorectomy. They studied 324 carriers of BRCA gene mutations after they underwent prophylactic salpingo-oophorectomy and a subset of 950 controls who had bilateral salpingo-oophorectomy for reasons unrelated to cancer. In both groups, hormone replacement therapy was underutilized. The authors recommended using it when clinically indicated.
Should your patient start hormone replacement therapy?
This patient is healthy, and in the absence of contraindications, systemic hormone replacement therapy after prophylactic oophorectomy could mitigate the potential adverse effects of surgically induced menopause. The patient can be reassured that estrogen-containing short-term hormone replacement therapy is unlikely to increase her breast cancer risk.
HORMONAL CONTRACEPTION AND THE RISK OF BREAST CANCER
A 44-year-old woman presents to your office for an annual visit. She is sexually active but does not wish to become pregnant. She has a family history of breast cancer: her mother was diagnosed at age 53. She is interested in an oral contraceptive to prevent pregnancy and acne. However, she is nervous about being on any contraceptive that may increase her risk of breast cancer.
To date, studies assessing the effect of hormonal contraception on the risk of breast cancer have produced inconsistent results. Although most studies have shown no associated risk, a few have shown a temporary 20% to 30% increased risk of breast cancer during use.13,14 Case-control studies that reported an association between hormonal contraception and breast cancer included populations taking higher-dose combination pills, which are no longer prescribed. Most studies do not evaluate specific formulations of hormonal contraception, and little is known about effects associated with intrauterine devices or progestin-only contraception.
A prospective study performed by Mørch et al13 followed more than 1 million reproductive-aged women for a mean of 10.9 years. The Danish Cancer Registry was used to identify cases of invasive breast cancer. Women who used hormonal contraceptives had a relative risk of breast cancer of 1.20 compared with women not on hormonal contraception (95% CI 1.14–1.26). The study suggested that those who had been on contraceptive agents for more than 5 years had an increased risk and that this risk remained for 5 years after the agents were discontinued. Conversely, no increased risk of cancer was noted in those who used hormonal contraception for less than 5 years. No notable differences were seen among various formulations.
For women using the levonorgestrel-containing intrauterine device, the relative risk of breast cancer was 1.21 (95% CI 1.11–1.33). A few cancers were noted in those who used the progestin-only implant or those using depot medroxyprogesterone acetate. While the study showed an increased relative risk of breast cancer, the absolute risk was low—13 cases per 100,000, or approximately 1 additional case of breast cancer per 7,690 per year.13
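The distinction above between relative and absolute risk is worth making explicit. Taking the reported figure of 13 extra cases per 100,000 women per year at face value, the number of users corresponding to one additional case can be recovered with simple arithmetic:

```python
# Absolute-risk arithmetic for the Danish cohort figures quoted above
extra_cases_per_100k_per_year = 13

# How many women on hormonal contraception correspond to one
# additional breast cancer case per year?
women_per_extra_case = 100_000 / extra_cases_per_100k_per_year
print(round(women_per_extra_case))  # ~7,692, i.e., roughly 1 in 7,690
```

This is why a relative risk of 1.20 can coexist with a very small absolute risk: the baseline incidence in reproductive-aged women is low.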
This study had several important limitations. The authors did not adjust for common breast cancer risk factors including age at menarche, alcohol use, or breastfeeding. Additionally, the study did not account for the use of hormonal contraception before the study period and conversely, did not account for women who may have stopped taking their contraceptive despite their prescribed duration. The frequency of mammography was not explicitly noted, which could have shifted results for women who had more aggressive screening.
It is also noteworthy that the use of high-dose systemic progestins was not associated with an increased risk, whereas the levonorgestrel intrauterine device, which contains only 1/20th the dose of a low-dose oral contraceptive pill, was associated with an increased risk. This discrepancy in risk warrants further investigation, and clinicians should be aware that this inconsistency needs validation before changing clinical practice.
In an observational cohort study,15 more than 100,000 women ages 50 to 71 were followed prospectively for 15 years to evaluate the association between hormonal contraceptive use and the risk of gynecologic and breast cancers. In this study, the duration of hormonal contraceptive use, smoking status, alcohol use, body mass index, physical activity, and family history of cancer were recorded. Long-term hormonal contraceptive use reduced ovarian and endometrial cancer risks by 40% and 34%, respectively, with no increase in breast cancer risk regardless of family history.
How would you counsel the patient?
The patient should be educated on the benefits of hormonal contraception that extend beyond pregnancy prevention, including regulation of menses, improved acne, decreased risk of endometrial and ovarian cancer, and likely reductions in colorectal cancer and overall mortality risk.13–16 Further, after their own systematic review of the data assessing risk of breast cancer with hormonal contraception, the US Centers for Disease Control and Prevention state in their guidelines that all contraceptives may be used without limitation in those who have a family history of breast cancer.4 Any potential increased risk of breast cancer in women using hormonal contraception is small and would not outweigh the benefits associated with use.
One must consider the impact of an unintended pregnancy in such women, including effects on the health of the fetus and mother. Recent reports on the increasing rates of maternal death in the US (23.8 of 100,000 live births) serve as a reminder of the complications that can arise with pregnancy, especially if a mother’s health is not optimized before conception.17
MAMMOGRAPHY PLUS TOMOSYNTHESIS VS MAMMOGRAPHY ALONE
The same 44-year-old patient now inquires about screening for breast cancer. She is curious about 3-dimensional mammography and whether it would be a better screening test for her.
Digital breast tomosynthesis (DBT) is a newer imaging modality that provides a 3-dimensional reconstruction of the breast using low-dose x-ray imaging. Some studies have shown that combining DBT with digital mammography may be superior to digital mammography alone in detecting cancers.18 However, digital mammography is currently the gold standard for breast cancer screening and is the only test proven to reduce mortality.18,19
In a retrospective US study of 13 medical centers,20 breast cancer detection rates increased by 41% the year after DBT was introduced, from 2.9 to 4.1 per 1,000 cases. DBT was associated with 16 fewer patients recalled for repeat imaging out of 1,000 women screened (as opposed to mammography alone). Two European studies similarly suggested an increase in cancer detection with lower recall rates.21,22
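The 41% figure cited above follows directly from the before-and-after detection rates, as a quick check shows:

```python
# Detection rates per 1,000 screens in the retrospective US study cited above
rate_before_dbt = 2.9
rate_after_dbt = 4.1

relative_increase = (rate_after_dbt - rate_before_dbt) / rate_before_dbt
print(f"{relative_increase:.0%}")  # 41%
```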
Is 3-D mammography a better option?
In a 2-arm study by Pattacini et al,18 nearly 20,000 women ages 45 to 70 were randomized to undergo either digital mammography or digital mammography plus DBT for primary breast cancer screening. Women were enrolled over a 2-year period and were followed for 4.5 years, and the development of a primary invasive cancer was the primary end point. Recall rates, reading times, and radiation doses were also compared between the 2 groups.
Overall, the cancer detection rate was higher in the digital mammography plus DBT arm compared with digital mammography alone (8.6 vs 4.5 per 1,000). The detection rates were higher in the combined screening group among all age subgroups, with relative risks ranging from 1.83 to 2.04 (P = .93). The recall rate was 3.5% in the 2 arms, with relative risks ranging from 0.93 to 1.11 (P = .52). There was a reduction in the number of false positives seen in women undergoing digital mammography plus DBT when compared with digital mammography alone, from 30 per 1,000 to 27 per 1,000.
Detection of ductal carcinoma in situ increased more in the experimental arm (relative detection rate 2.80, 95% CI 1.01–7.65) than detection of invasive cancers. The radiation dose was 2.3 times higher in those who underwent digital mammography plus DBT. The average reading times for digital mammography alone were 20 to 85 seconds; adding DBT added 35 to 81 seconds.19
Should you advise 3-D mammography?
The patient should be educated on the benefits of both digital mammography alone and digital mammography plus DBT. The use of digital mammography plus DBT has been supported in various studies and has been shown to increase cancer detection rates, although data are still conflicting regarding recall rates.19,20 More studies are needed to determine its effect on breast cancer mortality.
Routine use of DBT in women with or without dense breast tissue has not been recommended by organizations such as the USPSTF and the American College of Obstetricians and Gynecologists.23,24 While there is an increased dose of radiation, it still falls below the US Food and Drug Administration limits and should not be the sole barrier to use.
- Cauley JA. Screening for osteoporosis. JAMA 2018; 319(24):2483–2485. doi:10.1001/jama.2018.5722
- US Preventive Services Task Force, Curry SJ, Krist AH, Owens DK, et al. Screening for osteoporosis to prevent fractures: US Preventive Services Task Force recommendation statement. JAMA 2018; 319(24):2521–2531. doi:10.1001/jama.2018.7498
- Simmons KB, Haddad LB, Nanda K, Curtis KM. Drug interactions between non-rifamycin antibiotics and hormonal contraception: a systematic review. Am J Obstet Gynecol 2018; 218(1):88–97.e14. doi:10.1016/j.ajog.2017.07.003
- Curtis KM, Tepper NK, Jatlaoui TC, et al. US Medical eligibility criteria for contraceptive use, 2016. MMWR Recomm Rep 2016; 65(3):1–103. doi:10.15585/mmwr.rr6503a1
- Taylor J, Pemberton MN. Antibiotics and oral contraceptives: new considerations for dental practice. Br Dent J 2012; 212(10):481–483. doi:10.1038/sj.bdj.2012.414
- Kuchenbaecker KB, Hopper JL, Barnes DR, et al. Risks of breast, ovarian, and contralateral breast cancer for BRCA1 and BRCA2 mutation carriers. JAMA 2017; 317(23):2402–2416. doi:10.1001/jama.2017.7112
- Faubion SS, Kuhle CL, Shuster LT, Rocca WA. Long-term health consequences of premature or early menopause and considerations for management. Climacteric 2015; 18(4):483–491. doi:10.3109/13697137.2015.1020484
- Kotsopoulos J, Gronwald J, Karlan BY, et al; Hereditary Breast Cancer Clinical Study Group. Hormone replacement therapy after oophorectomy and breast cancer risk among BRCA1 mutation carriers. JAMA Oncol 2018; 4(8):1059–1065. doi:10.1001/jamaoncol.2018.0211
- Marchetti C, De Felice F, Boccia S, et al. Hormone replacement therapy after prophylactic risk reducing salpingo-oophorectomy and breast cancer risk in BRCA1 and BRCA2 mutation carriers: a meta-analysis. Crit Rev Oncol Hematol 2018; 132:111–115. doi:10.1016/j.critrevonc.2018.09.018
- Kotsopoulos J, Huzarski T, Gronwald J, et al. Hormone replacement therapy after menopause and risk of breast cancer in BRCA1 mutation carriers: a case-control study. Breast Cancer Res Treat 2016; 155(2):365–373. doi:10.1007/s10549-016-3685-3
- Rebbeck TR, Friebel T, Wagner T, et al; PROSE Study Group. Effect of short-term hormone replacement therapy on breast cancer risk reduction after bilateral prophylactic oophorectomy in BRCA1 and BRCA2 mutation carriers: the PROSE Study Group. J Clin Oncol 2005; 23(31):7804–7810. doi:10.1200/JCO.2004.00.8151
- Johansen N, Liavaag AH, Iversen OE, Dørum A, Braaten T, Michelsen TM. Use of hormone replacement therapy after risk-reducing salpingo-oophorectomy. Acta Obstet Gynecol Scand 2017; 96(5):547–555. doi:10.1111/aogs.13120
- Mørch LS, Skovlund CW, Hannaford PC, Iversen L, Fielding S, Lidegaard Ø. Contemporary hormonal contraception and the risk of breast cancer. N Engl J Med 2017; 377(23):2228–2239. doi:10.1056/NEJMoa1700732
- Batur P, Sikka S, McNamara M. Contraception update: extended use of long acting methods, hormonal contraception risks, and over the counter access. J Womens Health (Larchmt) 2018. doi:10.1089/jwh.2018.7391. [Epub ahead of print]
- Michels KA, Pfeiffer RM, Brinton LA, Trabert B. Modification of the associations between duration of oral contraceptive use and ovarian, endometrial, breast, and colorectal cancers. JAMA Oncol 2018; 4(4):516–521. doi:10.1001/jamaoncol.2017.4942
- Iversen L, Fielding S, Lidegaard Ø, Mørch LS, Skovlund CW, Hannaford PC. Association between contemporary hormonal contraception and ovarian cancer in women of reproductive age in Denmark: prospective, nationwide cohort study. BMJ 2018; 362:k3609. doi:10.1136/bmj.k3609
- MacDorman MF, Declercq E, Cabral H, Morton C. Recent increases in the US maternal mortality rate: disentangling trends from measurement issues. Obstet Gynecol 2016; 128(3):447–455. doi:10.1097/AOG.0000000000001556
- Pattacini P, Nitrosi A, Giorgi Rossi P, et al; RETomo Working Group. Digital mammography versus digital mammography plus tomosynthesis for breast cancer screening: the Reggio Emilia tomosynthesis randomized trial. Radiology 2018; 288(2):375–385. doi:10.1148/radiol.2018172119
- Pace L, Keating NL. A systematic assessment of benefits and risks to guide breast cancer screening decisions. JAMA 2014; 311(13):1327–1335. doi:10.1001/jama.2014.1398
- Friedewald SM, Rafferty EA, Rose SL, et al. Breast cancer screening using tomosynthesis in combination with digital mammography. JAMA 2014; 311(24):2499–2507. doi:10.1001/jama.2014.6095
- Skaane P, Bandos AI, Gullien R, et al. Comparison of digital mammography alone and digital mammography plus tomosynthesis in a population-based screening program. Radiology 2013; 267(1):47–56. doi:10.1148/radiol.12121373
- Ciatto S, Houssami N, Bernardi D, et al. Integration of 3D digital mammography with tomosynthesis for population breast-cancer screening (STORM): a prospective comparison study. Lancet Oncol 2013; 14(7):583–589. doi:10.1016/S1470-2045(13)70134-7
- US Preventive Services Task Force. Final recommendation statement: breast cancer: screening. www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/breast-cancer-screening1. Accessed May 13, 2019.
- American College of Obstetricians and Gynecologists. Breast cancer risk assessment and screening in average-risk women. www.acog.org/Clinical-Guidance-and-Publications/Practice-Bulletins/Committee-on-Practice-Bulletins-Gynecology/Breast-Cancer-Risk-Assessment-and-Screening-in-Average-Risk-Women?IsMobileSet=false#5. Accessed May 13, 2019.
Keeping up with current evidence-based healthcare practices is key to providing good clinical care to patients. This review presents 5 vignettes that highlight key issues in women’s health: osteoporosis screening, hormonal contraceptive interactions with antibiotics, hormone replacement therapy in carriers of the BRCA1 gene mutation, risks associated with hormonal contraception, and breast cancer diagnosis using digital tomosynthesis in addition to digital mammography. Supporting articles, all published in 2017 and 2018, were selected from high-impact medical and women’s health journals.
OSTEOPOROSIS SCREENING FOR FRACTURE PREVENTION
A 60-year-old woman reports that her last menstrual period was 7 years ago. She has no history of falls or fractures, and she takes no medications. She smokes 10 cigarettes per day and drinks 3 to 4 alcoholic beverages on most days of the week. She is 5 feet 6 inches (170 cm) tall and weighs 107 lb. Should she be screened for osteoporosis?
Osteoporosis is underdiagnosed
It is estimated that, in the United States, 12.3 million individuals older than 50 will develop osteoporosis by 2020. Missed opportunities to screen high-risk individuals can lead to fractures, including fractures of the hip.1
Updated screening recommendations
In 2018, the US Preventive Services Task Force (USPSTF) developed and published evidence-based recommendations for osteoporosis screening to help providers identify and treat osteoporosis early to prevent fractures.2 Available evidence on screening and treatment in women and men was reviewed with the intention of updating the 2011 USPSTF recommendations. The review also evaluated risk assessment tools, screening intervals, and efficacy of screening and treatment in various subpopulations.
Since the 2011 recommendations, more data have become available on fracture risk assessment with or without bone mineral density measurements. In its 2018 report, the USPSTF recommends that postmenopausal women younger than 65 should undergo screening with a bone density test if their 10-year risk of major osteoporotic fracture is more than 8.4%. This is equivalent to the fracture risk of a 65-year-old white woman with no major risk factors for fracture (grade B recommendation—high certainty that the benefit is moderate, or moderate certainty that the benefit is moderate to substantial).2
Assessment of fracture risk
For postmenopausal women who are under age 65 and who have at least 1 risk factor for fracture, it is reasonable to use a clinical risk assessment tool to determine who should undergo screening with bone mineral density measurement. Risk factors associated with an increased risk of osteoporotic fractures include a parental history of hip fracture, smoking, intake of 3 or more alcoholic drinks per day, low body weight, malabsorption, rheumatoid arthritis, diabetes, and postmenopausal status (not using estrogen replacement). Medications should be carefully reviewed for those that can increase the risk of fractures, including steroids and antiestrogen treatments.
The 10-year risk of a major osteoporotic or hip fracture can be assessed using the Fracture Risk Assessment Tool (FRAX), available at www.sheffield.ac.uk/FRAX/. Other acceptable tools that perform similarly to FRAX include the Osteoporosis Risk Assessment Instrument (ORAI) (10 studies; N = 16,780), Osteoporosis Index of Risk (OSIRIS) (5 studies; N = 5,649), Osteoporosis Self-Assessment Tool (OST) (13 studies; N = 44,323), and Simple Calculated Osteoporosis Risk Estimation (SCORE) (8 studies; N = 15,362).
Should this patient be screened for osteoporosis?
Based on the FRAX, this patient’s 10-year risk of major osteoporotic fracture is 9.2%. She would benefit from osteoporosis screening with a bone density test.
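The screening rule applied in this vignette can be sketched as a simple decision function. The 8.4% threshold comes from the 2018 USPSTF recommendation; the function name and structure are illustrative only and not part of any FRAX software.

```python
# Sketch of the 2018 USPSTF rule for postmenopausal women under 65:
# screen with bone densitometry when the FRAX-estimated 10-year risk
# of major osteoporotic fracture exceeds 8.4%.
USPSTF_RISK_THRESHOLD_PCT = 8.4  # 10-year major osteoporotic fracture risk

def should_screen(ten_year_risk_pct: float) -> bool:
    """True when the FRAX estimate meets the USPSTF screening cutoff."""
    return ten_year_risk_pct > USPSTF_RISK_THRESHOLD_PCT

# The vignette patient's FRAX estimate is 9.2%
print(should_screen(9.2))  # True: bone density testing is indicated
```

Applied to this patient, a FRAX estimate of 9.2% exceeds the 8.4% cutoff, so screening is indicated.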
DO ANTIBIOTICS REDUCE EFFECTIVENESS OF HORMONAL CONTRACEPTION?
A 27-year-old woman presents with a dog bite on her right hand and is started on oral antibiotics. She takes an oral contraceptive that contains 35 µg of ethinyl estradiol and 0.25 mg of norgestimate. She asks if she should use condoms while taking antibiotics.
The antibiotics rifampin and rifabutin are known inducers of the hepatic enzymes required for contraceptive steroid metabolism, whereas other antibiotics are not. Despite the lack of compelling evidence that broad-spectrum antibiotics interfere with the efficacy of hormonal contraception, most pharmacists recommend backup contraception for women who use concomitant antibiotics.3 This practice could lead to poor compliance with the contraceptive regimen, the antibiotic regimen, or both.3
Simmons et al3 conducted a systematic review of randomized and nonrandomized studies that assessed pregnancy rates, breakthrough bleeding, ovulation suppression, and hormone pharmacokinetics in women taking oral or vaginal hormonal contraceptives in combination with nonrifamycin antibiotics, including oral, intramuscular, and intravenous forms. Oral contraceptives used in the studies included a range of doses and progestins, but lowest-dose pills, such as those containing less than 30 µg ethinyl estradiol or less than 150 µg levonorgestrel, were not included.
The contraceptive formulations in this systematic review3 included oral contraceptive pills, emergency contraception pills, and the contraceptive vaginal ring. The effect of antibiotics on other nonoral contraceptives, such as the transdermal patch, injectables, and progestin implants, was not studied.
Four observational studies3 evaluated pregnancy rates or hormonal contraception failure with any antibiotic use. In 2 of these 4 studies, there was no difference in pregnancy rates in women who used oral contraceptives with and without nonrifamycin antibiotics. However, ethinyl estradiol was shown to have increased clearance when administered with dirithromycin (a macrolide).3 Twenty-five of the studies reported measures of contraceptive effectiveness (ovulation) and pharmacokinetic outcomes.
There were no observed differences in ovulation suppression or breakthrough bleeding in any study that combined hormonal contraceptives with an antibiotic. Furthermore, there was no significant decrease in progestin pharmacokinetic parameters during coadministration with an antibiotic.3 Study limitations included small sample sizes and the observational nature of the data.
How would you counsel this patient?
Available evidence suggests that nonrifamycin antibiotics do not diminish the effectiveness of the vaginal contraceptive ring or an oral hormonal contraceptive that contains at least 30 µg of ethinyl estradiol or 150 µg of levonorgestrel. Current guidelines do not recommend the use of additional backup contraception, regardless of hormonal contraception dose or formulation.4 Likewise, the most recent guidance for dental practitioners (ie, from 2012) no longer advises women to use additional contraceptive protection when taking nonrifamycin antibiotics.5
In our practice, we discuss the option of additional protection when prescribing formulations with lower estrogen doses (< 30 µg), not only because of the limitations of the available data, but also because of the high rates of unintended pregnancy with typical use of combined hormonal contraceptives (9% per year, unrelated to use of antibiotics).4 However, if our patient would rather not use additional barrier methods, she can be reassured that concomitant nonrifamycin antibiotic use is unlikely to affect contraceptive effectiveness.
HORMONE REPLACEMENT THERAPY IN CARRIERS OF THE BRCA1 MUTATION
A 41-year-old healthy mother of 3 was recently found to be a carrier of the BRCA1 mutation. She is planning to undergo prophylactic bilateral salpingo-oophorectomy for ovarian cancer prevention. However, she is apprehensive about undergoing surgical menopause. Should she be started on hormone replacement therapy after oophorectomy? How would hormone replacement therapy affect her risk of breast cancer?
In females who carry the BRCA1 mutation, the cumulative risks of ovarian and breast cancer by age 80 approach 44% (95% confidence interval [CI] 36%–53%) and 72% (95% CI 65%–79%), respectively.6 Prophylactic salpingo-oophorectomy reduces the risk of breast cancer by 50% and the risk of ovarian cancer by 90%. Unfortunately, premature withdrawal of ovarian hormones has been associated with long-term adverse effects including significant vasomotor symptoms, decreased quality of life, sexual dysfunction, early mortality, bone loss, decline in mood and cognition, and poor cardiovascular outcomes.7 Many of these effects can be avoided or lessened with hormone replacement therapy.
Kotsopoulos et al8 conducted a longitudinal, prospective analysis of BRCA1 mutation carriers in a multicenter study between 1995 and 2017. The mean follow-up period was 7.6 years (range 0.4–22.1). The study assessed associations between the use of hormone replacement therapy and breast cancer risk in carriers of the BRCA1 mutation who underwent prophylactic salpingo-oophorectomy. Study participants did not have a personal history of cancer. Those with a history of prophylactic mastectomy were excluded.
Participants completed a series of questionnaires every 2 years, disclosing updates in personal medical, cancer, and reproductive history. The questionnaires also inquired about the use of hormone replacement therapy, including the type used (estrogen only, progestin only, estrogen plus progestin, other), brand name, duration of use, and dose and route of administration (pill, patch, suppository).
Of the 13,087 BRCA1 mutation carriers identified, 872 met the study criteria. Of those, 377 (43%) reported using some form of hormone replacement therapy after salpingo-oophorectomy, and 495 (57%) did not. The average duration of use was 3.9 years (range 0.5–19), with most (69%) using estrogen alone; 18% used other regimens, including estrogen plus progestin and progestin only. A small percentage of participants did not indicate which formulation they used. On average, women using hormone replacement therapy underwent prophylactic oophorectomy earlier than nonusers (age 43.0 vs 48.4; absolute difference 5.4 years; P < .001).
During follow-up, there was no significant difference in the proportion of women diagnosed with breast cancer between hormone replacement therapy users and nonusers (10.3% vs 10.7%; absolute difference 0.4%; P = .86). In fact, for each year of estrogen-containing hormone replacement therapy, there was an 18% reduction in breast cancer risk when oophorectomy was performed before age 45 (95% CI 0.69–0.97). The authors also noted a nonsignificant 14% trend toward an increase in breast cancer risk for each year of progestin use after oophorectomy when surgery was performed before age 45 (95% CI 0.90–1.46).
Although prophylactic hysterectomy was not recommended, the authors noted that hysterectomy would eliminate the need for progestin-containing hormone replacement therapy. For those who underwent oophorectomy after age 45, hormone replacement therapy did not increase or decrease the risk of breast cancer.7
A meta-analysis by Marchetti et al9 also supports the safety of hormone replacement therapy after risk-reducing salpingo-oophorectomy. Three studies that included 1,100 patients were analyzed (including the Kotsopoulos study8 noted above). There was a nonsignificant decrease in breast cancer risk in women on estrogen-only hormone replacement therapy compared with women on estrogen-plus-progestin therapy (odds ratio 0.53, 95% CI 0.25–1.15). Overall, the authors regarded hormone replacement therapy as a safe therapeutic option after prophylactic salpingo-oophorectomy in carriers of the BRCA1 and BRCA2 mutations.9
In a case-control study published in 2016,10 hormone replacement therapy was assessed in 432 postmenopausal BRCA1 mutation carriers with invasive breast cancer (cases) and in 432 BRCA1 mutation carriers without a history of breast cancer (controls). Results showed no difference in breast cancer risk between hormone replacement therapy users and nonusers.10
Rebbeck et al11 evaluated short-term hormone replacement therapy in carriers of BRCA1 and BRCA2 gene mutations after prophylactic salpingo-oophorectomy. The results showed that hormone replacement did not affect the breast cancer risk reduction conferred by prophylactic bilateral salpingo-oophorectomy.
Johansen et al12 evaluated hormone replacement therapy in premenopausal women after prophylactic salpingo-oophorectomy. They studied 324 carriers of BRCA gene mutations after they underwent prophylactic salpingo-oophorectomy and a subset of 950 controls who had bilateral salpingo-oophorectomy for reasons unrelated to cancer. In both groups, hormone replacement therapy was underutilized. The authors recommended using it when clinically indicated.
Should your patient start hormone replacement therapy?
This patient is healthy, and in the absence of contraindications, systemic hormone replacement therapy after prophylactic oophorectomy could mitigate the potential adverse effects of surgically induced menopause. The patient can be reassured that estrogen-containing short-term hormone replacement therapy is unlikely to increase her breast cancer risk.
HORMONAL CONTRACEPTION AND THE RISK OF BREAST CANCER
A 44-year-old woman presents to your office for an annual visit. She is sexually active but does not wish to become pregnant. She has a family history of breast cancer: her mother was diagnosed at age 53. She is interested in an oral contraceptive to prevent pregnancy and acne. However, she is nervous about being on any contraceptive that may increase her risk of breast cancer.
To date, studies assessing the effect of hormonal contraception on the risk of breast cancer have produced inconsistent results. Although most studies have shown no associated risk, a few have shown a temporary 20% to 30% increased risk of breast cancer during use.13,14 Case-control studies that reported an association between hormonal contraception and breast cancer included populations taking higher-dose combination pills, which are no longer prescribed. Most studies do not evaluate specific formulations of hormonal contraception, and little is known about effects associated with intrauterine devices or progestin-only contraception.
A prospective study performed by Mørch et al13 followed more than 1 million reproductive-aged women for a mean of 10.9 years. The Danish Cancer Registry was used to identify cases of invasive breast cancer. Women who used hormonal contraceptives had a relative risk of breast cancer of 1.20 compared with women not on hormonal contraception (95% CI 1.14–1.26). The study suggested that those who had been on contraceptive agents for more than 5 years had an increased risk and that this risk remained for 5 years after the agents were discontinued. Conversely, no increased risk of cancer was noted in those who used hormonal contraception for less than 5 years. No notable differences were seen among various formulations.
For women using the levonorgestrel-containing intrauterine device, the relative risk of breast cancer was 1.21 (95% CI 1.11–1.33). Few cancers were noted in users of the progestin-only implant or depot medroxyprogesterone acetate. While the study showed an increased relative risk of breast cancer, the absolute risk was low—13 cases per 100,000, or approximately 1 additional case of breast cancer per 7,690 users per year.13
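The absolute-risk figure reported above follows from simple arithmetic; this is a back-of-envelope check, not a calculation from the study itself.

```python
# An excess of 13 breast cancer cases per 100,000 women per year
# translates to roughly 1 additional case per ~7,690 users per year.
excess_cases_per_100k = 13
women_per_additional_case = 100_000 / excess_cases_per_100k
print(round(women_per_additional_case))  # ~7692, reported as about 1 in 7,690
```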
This study had several important limitations. The authors did not adjust for common breast cancer risk factors, including age at menarche, alcohol use, and breastfeeding. Additionally, the study did not account for hormonal contraception use before the study period and, conversely, did not account for women who may have stopped taking their contraceptive before completing the prescribed duration. The frequency of mammography was not explicitly noted, which could have shifted results for women who underwent more aggressive screening.
It is also noteworthy that the use of high-dose systemic progestins was not associated with an increased risk, whereas the levonorgestrel intrauterine device, which contains only 1/20th the dose of a low-dose oral contraceptive pill, was associated with an increased risk. This discrepancy in risk warrants further investigation, and clinicians should be aware that this inconsistency needs validation before changing clinical practice.
In an observational cohort study,15 more than 100,000 women ages 50 to 71 were followed prospectively for 15 years to evaluate the association between hormonal contraceptive use and the risk of gynecologic and breast cancers. In this study, the duration of hormonal contraceptive use, smoking status, alcohol use, body mass index, physical activity, and family history of cancer were recorded. Long-term hormonal contraceptive use reduced ovarian and endometrial cancer risks by 40% and 34%, respectively, with no increase in breast cancer risk regardless of family history.
How would you counsel the patient?
The patient should be educated on the benefits of hormonal contraception that extend beyond pregnancy prevention, including regulation of menses, improved acne, decreased risk of endometrial and ovarian cancer, and likely reductions in colorectal cancer and overall mortality risk.13–16 Further, after their own systematic review of the data assessing risk of breast cancer with hormonal contraception, the US Centers for Disease Control and Prevention state in their guidelines that all contraceptives may be used without limitation in those who have a family history of breast cancer.4 Any potential increased risk of breast cancer in women using hormonal contraception is small and would not outweigh the benefits associated with use.
One must consider the impact of an unintended pregnancy in such women, including effects on the health of the fetus and mother. Recent reports of rising maternal mortality in the US (23.8 deaths per 100,000 live births) serve as a reminder of the complications that can arise with pregnancy, especially if a mother’s health is not optimized before conception.17
MAMMOGRAPHY PLUS TOMOSYNTHESIS VS MAMMOGRAPHY ALONE
The same 44-year-old patient now inquires about screening for breast cancer. She is curious about 3-dimensional mammography and whether it would be a better screening test for her.
Digital breast tomosynthesis (DBT) is a newer imaging modality that provides a 3-dimensional reconstruction of the breast using low-dose x-ray imaging. Some studies have shown that combining DBT with digital mammography may be superior to digital mammography alone in detecting cancers.18 However, digital mammography is currently the gold standard for breast cancer screening and is the only test proven to reduce mortality.18,19
In a retrospective study at 13 US medical centers,20 the breast cancer detection rate increased by 41% in the year after DBT was introduced, from 2.9 to 4.1 per 1,000 screens. Compared with mammography alone, DBT was also associated with 16 fewer recalls for repeat imaging per 1,000 women screened. Two European studies similarly suggested an increase in cancer detection with lower recall rates.21,22
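The reported 41% increase follows directly from the before-and-after detection rates; a quick check (illustrative arithmetic only):

```python
# Detection rose from 2.9 to 4.1 cancers per 1,000 screens after DBT
# was introduced, an increase of about 41%.
rate_before, rate_after = 2.9, 4.1
percent_increase = (rate_after - rate_before) / rate_before * 100
print(round(percent_increase))  # 41
```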
Is 3-D mammography a better option?
In a 2-arm study by Pattacini et al,18 nearly 20,000 women ages 45 to 70 were randomized to undergo either digital mammography or digital mammography plus DBT for primary breast cancer screening. Women were enrolled over a 2-year period and were followed for 4.5 years, and the development of a primary invasive cancer was the primary end point. Recall rates, reading times, and radiation doses were also compared between the 2 groups.
Overall, the cancer detection rate was higher with digital mammography plus DBT than with digital mammography alone (8.6 vs 4.5 per 1,000). Detection rates were higher in the combined screening group across all age subgroups, with relative risks ranging from 1.83 to 2.04 (P = .93). The recall rate was 3.5% in both arms, with relative risks ranging from 0.93 to 1.11 (P = .52). There were fewer false positives with digital mammography plus DBT than with digital mammography alone (27 vs 30 per 1,000).
Detection of ductal carcinoma in situ increased more in the experimental arm (relative detection 2.80, 95% CI 1.01–7.65) than detection of invasive cancers. The radiation dose was 2.3 times higher with digital mammography plus DBT. Average reading times for digital mammography alone were 20 to 85 seconds; adding DBT added 35 to 81 seconds.19
Should you advise 3-D mammography?
The patient should be educated on the benefits of both digital mammography alone and digital mammography plus DBT. The use of digital mammography plus DBT has been supported in various studies and has been shown to increase cancer detection rates, although data on recall rates remain conflicting.19,20 More studies are needed to determine its effect on breast cancer mortality.
Routine use of DBT in women with or without dense breast tissue has not been recommended by organizations such as the USPSTF and the American College of Obstetricians and Gynecologists.23,24 Although DBT involves a higher radiation dose, the dose still falls below US Food and Drug Administration limits and should not be the sole barrier to its use.
Keeping up with current evidence-based healthcare practices is key to providing good clinical care to patients. This review presents 5 vignettes that highlight key issues in women’s health: osteoporosis screening, hormonal contraceptive interactions with antibiotics, hormone replacement therapy in carriers of the BRCA1 gene mutation, risks associated with hormonal contraception, and breast cancer diagnosis using digital tomosynthesis in addition to digital mammography. Supporting articles, all published in 2017 and 2018, were selected from high-impact medical and women’s health journals.
OSTEOPOROSIS SCREENING FOR FRACTURE PREVENTION
A 60-year-old woman reports that her last menstrual period was 7 years ago. She has no history of falls or fractures, and she takes no medications. She smokes 10 cigarettes per day and drinks 3 to 4 alcoholic beverages on most days of the week. She is 5 feet 6 inches (170 cm) tall and weighs 107 lb. Should she be screened for osteoporosis?
Osteoporosis is underdiagnosed
It is estimated that, in the United States, 12.3 million individuals older than 50 will develop osteoporosis by 2020. Missed opportunities to screen high-risk individuals can lead to fractures, including fractures of the hip.1
Updated screening recommendations
In 2018, the US Preventive Services Task Force (USPSTF) developed and published evidence-based recommendations for osteoporosis screening to help providers identify and treat osteoporosis early to prevent fractures.2 Available evidence on screening and treatment in women and men were reviewed with the intention of updating the 2011 USPSTF recommendations. The review also evaluated risk assessment tools, screening intervals, and efficacy of screening and treatment in various subpopulations.
Since the 2011 recommendations, more data have become available on fracture risk assessment with or without bone mineral density measurements. In its 2018 report, the USPSTF recommends that postmenopausal women younger than 65 should undergo screening with a bone density test if their 10-year risk of major osteoporotic fracture is more than 8.4%. This is equivalent to the fracture risk of a 65-year-old white woman with no major risk factors for fracture (grade B recommendation—high certainty that the benefit is moderate, or moderate certainty that the benefit is moderate to substantial).2
Assessment of fracture risk
For postmenopausal women who are under age 65 and who have at least 1 risk factor for fracture, it is reasonable to use a clinical risk assessment tool to determine who should undergo screening with bone mineral density measurement. Risk factors associated with an increased risk of osteoporotic fractures include a parental history of hip fracture, smoking, intake of 3 or more alcoholic drinks per day, low body weight, malabsorption, rheumatoid arthritis, diabetes, and postmenopausal status (not using estrogen replacement). Medications should be carefully reviewed for those that can increase the risk of fractures, including steroids and antiestrogen treatments.
The 10-year risk of a major osteoporotic or hip fracture can be assessed using the Fractional Risk Assessment Tool (FRAX), available at www.sheffield.ac.uk/FRAX/. Other acceptable tools that perform similarly to FRAX include the Osteoporosis Risk Assessment Instrument (ORAI) (10 studies; N = 16,780), Osteoporosis Index of Risk (OSIRIS) (5 studies; N = 5,649), Osteoporosis Self-Assessment Tool (OST) (13 studies; N = 44,323), and Simple Calculated Osteoporosis Risk Estimation (SCORE) (8 studies; N = 15,362).
Should this patient be screened for osteoporosis?
Based on the FRAX, this patient’s 10-year risk of major osteoporosis fracture is 9.2%. She would benefit from osteoporosis screening with a bone density test.
DO ANTIBIOTICS REDUCE EFFECTIVENESS OF HORMONAL CONTRACEPTION?
A 27-year-old woman presents with a dog bite on her right hand and is started on oral antibiotics. She takes an oral contraceptive that contains 35 µg of ethinyl estradiol and 0.25 mg of norgestimate. She asks if she should use condoms while taking antibiotics.
The antibiotics rifampin and rifabutin are known inducers of the hepatic enzymes required for contraceptive steroid metabolism, whereas other antibiotics are not. Despite the lack of compelling evidence that broad-spectrum antibiotics interfere with the efficacy of hormonal contraception, most pharmacists recommend backup contraception for women who use concomitant antibiotics.3 This practice could lead to poor compliance with the contraceptive regimen, the antibiotic regimen, or both.3
Simmons et al3 conducted a systematic review of randomized and nonrandomized studies that assessed pregnancy rates, breakthrough bleeding, ovulation suppression, and hormone pharmacokinetics in women taking oral or vaginal hormonal contraceptives in combination with nonrifamycin antibiotics, including oral, intramuscular, and intravenous forms. Oral contraceptives used in the studies included a range of doses and progestins, but lowest-dose pills, such as those containing less than 30 µg ethinyl estradiol or less than 150 µg levonorgestrel, were not included.
The contraceptive formulations in this systematic review3 included oral contraceptive pills, emergency contraception pills, and the contraceptive vaginal ring. The effect of antibiotics on other nonoral contraceptives, such as the transdermal patch, injectables, and progestin implants was not studied.
Four observational studies3 evaluated pregnancy rates or hormonal contraception failure with any antibiotic use. In 2 of these 4 studies, there was no difference in pregnancy rates in women who used oral contraceptives with and without nonrifamycin antibiotics. However, ethinyl estradiol was shown to have increased clearance when administered with dirithromycin (a macrolide).3 Twenty-five of the studies reported measures of contraceptive effectiveness (ovulation) and pharmacokinetic outcomes.
There were no observed differences in ovulation suppression or breakthrough bleeding in any study that combined hormonal contraceptives with an antibiotic. Furthermore, there was no significant decrease in progestin pharmacokinetic parameters during coadministration with an antibiotic.3 Study limitations included small sample sizes and the observational nature of the data.
How would you counsel this patient?
Available evidence suggests that nonrifamycin antibiotics do not diminish the effectiveness of the vaginal contraceptive ring or an oral hormonal contraceptive that contains at least 30 µg of ethinyl estradiol or 150 µg of levonorgestrel. Current guidelines do not recommend the use of additional backup contraception, regardless of hormonal contraception dose or formulation.4 Likewise, the most recent guidance for dental practitioners (ie, from 2012) no longer advises women to use additional contraceptive protection when taking nonrifamycin antibiotics.5
In our practice, we discuss the option of additional protection when prescribing formulations with lower estrogen doses (< 30 µg), not only because of the limitations of the available data, but also because of the high rates of unintended pregnancy with typical use of combined hormonal contraceptives (9% per year, unrelated to use of antibiotics).4 However, if our patient would rather not use additional barrier methods, she can be reassured that concomitant nonrifamycin antibiotic use is unlikely to affect contraceptive effectiveness.
HORMONE REPLACEMENT THERAPY IN CARRIERS OF THE BRCA1 MUTATION
A 41-year-old healthy mother of 3 was recently found to be a carrier of the BRCA1 mutation. She is planning to undergo prophylactic bilateral salpingo-oophorectomy for ovarian cancer prevention. However, she is apprehensive about undergoing surgical menopause. Should she be started on hormone replacement therapy after oophorectomy? How would hormone replacement therapy affect her risk of breast cancer?
In females who carry the BRCA1 mutation, the cumulative risk of both ovarian and breast cancer approaches 44% (95% confidence interval [CI] 36%–53%) and 72% (95% CI 65%–79%) by age 80.6 Prophylactic salpingo-oophorectomy reduces the risk of breast cancer by 50% and the risk of ovarian cancer by 90%. Unfortunately, premature withdrawal of ovarian hormones has been associated with long-term adverse effects including significant vasomotor symptoms, decreased quality of life, sexual dysfunction, early mortality, bone loss, decline in mood and cognition, and poor cardiovascular outcomes.7 Many of these effects can be avoided or lessened with hormone replacement therapy.
Kotsopoulos et al8 conducted a longitudinal, prospective analysis of BRCA1 mutation carriers in a multicenter study between 1995 and 2017. The mean follow-up period was 7.6 years (range 0.4–22.1). The study assessed associations between the use of hormone replacement therapy and breast cancer risk in carriers of the BRCA1 mutation who underwent prophylactic salpingo-oophorectomy. Study participants did not have a personal history of cancer. Those with a history of prophylactic mastectomy were excluded.
Participants completed a series of questionnaires every 2 years, disclosing updates in personal medical, cancer, and reproductive history. The questionnaires also inquired about the use of hormone replacement therapy, including the type used (estrogen only, progestin only, estrogen plus progestin, other), brand name, duration of use, and dose and route of administration (pill, patch, suppository).
Of the 13,087 BRCA1 mutation carriers identified, 872 met the study criteria. Of those, 377 (43%) reported using some form of hormone replacement therapy after salpingo-oophorectomy, and 495 (57%) did not. The average duration of use was 3.9 years (range 0.5–19), with most (69%) using estrogen alone; 18% used other regimens, including estrogen plus progestin and progestin only. A small percentage of participants did not indicate which formulation they used. On average, women using hormone replacement therapy underwent prophylactic oophorectomy earlier than nonusers (age 43.0 vs 48.4; absolute difference 5.5 years, P < .001).
During follow-up, there was no significant difference noted in the proportion of women diagnosed with breast cancer between hormone replacement therapy users and nonusers (10.3 vs 10.7%; absolute difference 0.4%; P = .86). In fact, for each year of estrogen-containing hormone replacement therapy, there was an 18% reduction in breast cancer risk when oophorectomy was performed before age 45 (95% CI 0.69–0.97). The authors also noted a nonsignificant 14% trend toward an increase in breast cancer risk for each year of progestin use after oophorectomy when surgery was performed before age 45 (95% CI 0.9–1.46).
Although prophylactic hysterectomy was not recommended, the authors noted that hysterectomy would eliminate the need for progestin-containing hormone replacement therapy. For those who underwent oophorectomy after age 45, hormone replacement therapy did not increase or decrease the risk of breast cancer.7
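The per-year estimate above can be made concrete with a short calculation. A minimal Python sketch, assuming the reported per-year hazard ratio of 0.82 compounds multiplicatively over years of use (a modeling assumption for illustration, not a result reported by the study):

```python
# Illustration only: how a per-year hazard ratio compounds over a duration of use.
# The 0.82 point estimate (an 18% per-year reduction, oophorectomy before age 45)
# is quoted from the text; the multiplicative extrapolation is our assumption.

def cumulative_hazard_ratio(per_year_hr: float, years: float) -> float:
    """Compound a per-year hazard ratio over `years` of use."""
    return per_year_hr ** years

hr_1yr = 0.82                                   # per-year HR for estrogen-containing HRT
hr_4yr = cumulative_hazard_ratio(hr_1yr, 4)     # cumulative HR after ~4 years (mean use was 3.9 years)
print(round(hr_4yr, 2))                         # prints 0.45
```

Under this (assumed) model, the average duration of use in the study would correspond to roughly a halving of the hazard, which conveys why the per-year estimate reached significance.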
A meta-analysis by Marchetti et al9 also supports the safety of hormone replacement therapy after risk-reducing salpingo-oophorectomy. Three studies that included 1,100 patients were analyzed (including the Kotsopoulos study8 noted above). There was a nonsignificant decrease in breast cancer risk in women on estrogen-only hormone replacement therapy compared with women on estrogen-plus-progestin therapy (odds ratio 0.53, 95% CI 0.25–1.15). Overall, the authors regarded hormone replacement therapy as a safe therapeutic option after prophylactic salpingo-oophorectomy in carriers of the BRCA1 and BRCA2 mutations.9
In a case-control study published in 2016,10 hormone replacement therapy was assessed in 432 postmenopausal BRCA1 mutation carriers with invasive breast cancer (cases) and in 432 BRCA1 mutation carriers without a history of breast cancer (controls). Results showed no difference in breast cancer risk between hormone replacement therapy users and nonusers.10
Rebbeck et al11 evaluated short-term hormone replacement therapy in BRCA1 and BRCA2 gene-mutation carriers after they underwent prophylactic salpingo-oophorectomy. The results showed that hormone replacement did not affect the breast cancer risk-reduction conferred with prophylactic bilateral salpingo-oophorectomy.
Johansen et al12 evaluated hormone replacement therapy in premenopausal women after prophylactic salpingo-oophorectomy. They studied 324 carriers of BRCA gene mutations after they underwent prophylactic salpingo-oophorectomy and a subset of 950 controls who had bilateral salpingo-oophorectomy for reasons unrelated to cancer. In both groups, hormone replacement therapy was underutilized. The authors recommended using it when clinically indicated.
Should your patient start hormone replacement therapy?
This patient is healthy, and in the absence of contraindications, systemic hormone replacement therapy after prophylactic oophorectomy could mitigate the potential adverse effects of surgically induced menopause. The patient can be reassured that estrogen-containing short-term hormone replacement therapy is unlikely to increase her breast cancer risk.
HORMONAL CONTRACEPTION AND THE RISK OF BREAST CANCER
A 44-year-old woman presents to your office for an annual visit. She is sexually active but does not wish to become pregnant. She has a family history of breast cancer: her mother was diagnosed at age 53. She is interested in an oral contraceptive to prevent pregnancy and acne. However, she is nervous about being on any contraceptive that may increase her risk of breast cancer.
To date, studies assessing the effect of hormonal contraception on the risk of breast cancer have produced inconsistent results. Although most studies have shown no associated risk, a few have shown a temporary 20% to 30% increased risk of breast cancer during use.13,14 Case-control studies that reported an association between hormonal contraception and breast cancer included populations taking higher-dose combination pills, which are no longer prescribed. Most studies do not evaluate specific formulations of hormonal contraception, and little is known about effects associated with intrauterine devices or progestin-only contraception.
A prospective study performed by Mørch et al13 followed more than 1 million reproductive-aged women for a mean of 10.9 years. The Danish Cancer Registry was used to identify cases of invasive breast cancer. Women who used hormonal contraceptives had a relative risk of breast cancer of 1.20 compared with women not on hormonal contraception (95% CI 1.14–1.26). The study suggested that those who had been on contraceptive agents for more than 5 years had an increased risk and that this risk remained for 5 years after the agents were discontinued. Conversely, no increased risk of cancer was noted in those who used hormonal contraception for less than 5 years. No notable differences were seen among various formulations.
For women using the levonorgestrel-containing intrauterine device, the relative risk of breast cancer was 1.21 (95% CI 1.11–1.33). Few cancers were noted among users of the progestin-only implant or depot medroxyprogesterone acetate. While the study showed an increased relative risk of breast cancer, the absolute risk was low: 13 cases per 100,000, or approximately 1 additional case of breast cancer per 7,690 women per year.13
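The gap between relative and absolute risk in the Mørch data is simple arithmetic. A minimal Python sketch using the figures quoted above:

```python
# Relative-vs-absolute risk arithmetic from the Mørch study figures quoted above.
# The excess of 13 cases per 100,000 per year is from the text; the conversion
# below simply inverts it to "women per one additional case".

excess_cases_per_100k = 13                                  # absolute excess, per 100,000 women per year
women_per_extra_case = 100_000 / excess_cases_per_100k
print(round(women_per_extra_case))                          # prints 7692 (~1 extra case per 7,690 women/year)
```

This is why a relative risk of 1.20 can coexist with a clinically small absolute effect: the baseline incidence in this young population is low.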
This study had several important limitations. The authors did not adjust for common breast cancer risk factors, including age at menarche, alcohol use, and breastfeeding. The study also did not account for hormonal contraception used before the study period, nor for women who stopped taking their contraceptive before the end of the prescribed duration. The frequency of mammography was not explicitly noted, which could have skewed results for women who underwent more aggressive screening.
It is also noteworthy that the use of high-dose systemic progestins was not associated with an increased risk, whereas the levonorgestrel intrauterine device, which contains only 1/20th the dose of a low-dose oral contraceptive pill, was associated with an increased risk. This discrepancy in risk warrants further investigation, and clinicians should be aware that this inconsistency needs validation before changing clinical practice.
In an observational cohort study,15 more than 100,000 women ages 50 to 71 were followed prospectively for 15 years to evaluate the association between hormonal contraceptive use and the risk of gynecologic and breast cancers. In this study, the duration of hormonal contraceptive use, smoking status, alcohol use, body mass index, physical activity, and family history of cancer were recorded. Long-term hormonal contraceptive use reduced ovarian and endometrial cancer risks by 40% and 34%, respectively, with no increase in breast cancer risk regardless of family history.
How would you counsel the patient?
The patient should be educated on the benefits of hormonal contraception that extend beyond pregnancy prevention, including regulation of menses, improved acne, decreased risk of endometrial and ovarian cancer, and likely reductions in colorectal cancer and overall mortality risk.13–16 Further, after its own systematic review of the data on breast cancer risk with hormonal contraception, the US Centers for Disease Control and Prevention states in its guidelines that all contraceptives may be used without limitation in those who have a family history of breast cancer.4 Any potential increase in breast cancer risk in women using hormonal contraception is small and does not outweigh the benefits of use.
One must consider the impact of an unintended pregnancy in such women, including effects on the health of the fetus and mother. Recent reports on the increasing rates of maternal death in the US (23.8 of 100,000 live births) serve as a reminder of the complications that can arise with pregnancy, especially if a mother’s health is not optimized before conception.17
MAMMOGRAPHY PLUS TOMOSYNTHESIS VS MAMMOGRAPHY ALONE
The same 44-year-old patient now inquires about screening for breast cancer. She is curious about 3-dimensional mammography and whether it would be a better screening test for her.
Digital breast tomosynthesis (DBT) is a newer imaging modality that provides a 3-dimensional reconstruction of the breast using low-dose x-ray imaging. Some studies have shown that combining DBT with digital mammography may be superior to digital mammography alone in detecting cancers.18 However, digital mammography is currently the gold standard for breast cancer screening and is the only test proven to reduce mortality.18,19
In a retrospective US study of 13 medical centers,20 breast cancer detection rates increased by 41% in the year after DBT was introduced, from 2.9 to 4.1 per 1,000 cases. DBT was also associated with 16 fewer recalls for repeat imaging per 1,000 women screened compared with mammography alone. Two European studies similarly suggested an increase in cancer detection with lower recall rates.21,22
Is 3-D mammography a better option?
In a 2-arm study by Pattacini et al,18 nearly 20,000 women ages 45 to 70 were randomized to undergo either digital mammography or digital mammography plus DBT for primary breast cancer screening. Women were enrolled over a 2-year period and were followed for 4.5 years, and the development of a primary invasive cancer was the primary end point. Recall rates, reading times, and radiation doses were also compared between the 2 groups.
Overall, the cancer detection rate was higher in the digital mammography plus DBT arm compared with digital mammography alone (8.6 vs 4.5 per 1,000). The detection rates were higher in the combined screening group among all age subgroups, with relative risks ranging from 1.83 to 2.04 (P = .93). The recall rate was 3.5% in the 2 arms, with relative risks ranging from 0.93 to 1.11 (P = .52). There was a reduction in the number of false positives seen in women undergoing digital mammography plus DBT when compared with digital mammography alone, from 30 per 1,000 to 27 per 1,000.
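The two arms' detection rates can be restated in absolute terms with back-of-envelope arithmetic; the number-needed-to-screen figure below is a derived illustration, not a statistic reported by the trial:

```python
# Back-of-envelope arithmetic on the Reggio Emilia trial figures quoted above.
# The two detection rates are from the text; the "additional cancers per 1,000
# screens" and number-needed-to-screen values are our derived illustrations.

detect_dbt = 8.6 / 1000     # detection rate, digital mammography + DBT
detect_dm = 4.5 / 1000      # detection rate, digital mammography alone

extra_per_1000 = (detect_dbt - detect_dm) * 1000    # additional cancers detected per 1,000 screens
nns_one_extra = 1 / (detect_dbt - detect_dm)        # women screened per additional cancer detected
print(round(extra_per_1000, 1), round(nns_one_extra))   # prints 4.1 244
```

Framed this way, combined screening detected roughly 4 additional cancers per 1,000 women, or about one extra cancer for every ~244 women screened, at the cost of a higher radiation dose and longer reading times.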
Detection of ductal carcinoma in situ increased more in the experimental arm than detection of invasive cancers (relative detection 2.80, 95% CI 1.01–7.65). The radiation dose was 2.3 times higher in those who underwent digital mammography plus DBT. The average reading time for digital mammography alone was 20 to 85 seconds; adding DBT added 35 to 81 seconds.19
Should you advise 3-D mammography?
The patient should be educated on the benefits of both digital mammography alone and digital mammography plus DBT. The use of digital mammography plus DBT has been supported in various studies and has been shown to increase cancer detection rates, although data are still conflicting regarding recall rates.19,20 More studies are needed to determine its effect on breast cancer mortality.
Routine use of DBT in women with or without dense breast tissue has not been recommended by organizations such as the USPSTF and the American College of Obstetricians and Gynecologists.23,24 While there is an increased dose of radiation, it still falls below the US Food and Drug Administration limits and should not be the sole barrier to use.
- Cauley JA. Screening for osteoporosis. JAMA 2018; 319(24):2483–2485. doi:10.1001/jama.2018.5722
- US Preventive Services Task Force, Curry SJ, Krist AH, Owens DK, et al. Screening for osteoporosis to prevent fractures: US Preventive Services Task Force recommendation statement. JAMA 2018; 319(24):2521–2531. doi:10.1001/jama.2018.7498
- Simmons KB, Haddad LB, Nanda K, Curtis KM. Drug interactions between non-rifamycin antibiotics and hormonal contraception: a systematic review. Am J Obstet Gynecol 2018; 218(1):88–97.e14. doi:10.1016/j.ajog.2017.07.003
- Curtis KM, Tepper NK, Jatlaoui TC, et al. US Medical eligibility criteria for contraceptive use, 2016. MMWR Recomm Rep 2016; 65(3):1–103. doi:10.15585/mmwr.rr6503a1
- Taylor J, Pemberton MN. Antibiotics and oral contraceptives: new considerations for dental practice. Br Dent J 2012; 212(10):481–483. doi:10.1038/sj.bdj.2012.414
- Kuchenbaecker KB, Hopper JL, Barnes DR, et al. Risks of breast, ovarian, and contralateral breast cancer for BRCA1 and BRCA2 mutation carriers. JAMA 2017; 317(23):2402–2416. doi:10.1001/jama.2017.7112
- Faubion SS, Kuhle CL, Shuster LT, Rocca WA. Long-term health consequences of premature or early menopause and considerations for management. Climacteric 2015; 18(4):483–491. doi:10.3109/13697137.2015.1020484
- Kotsopoulos J, Gronwald J, Karlan BY, et al; Hereditary Breast Cancer Clinical Study Group. Hormone replacement therapy after oophorectomy and breast cancer risk among BRCA1 mutation carriers. JAMA Oncol 2018; 4(8):1059–1065. doi:10.1001/jamaoncol.2018.0211
- Marchetti C, De Felice F, Boccia S, et al. Hormone replacement therapy after prophylactic risk reducing salpingo-oophorectomy and breast cancer risk in BRCA1 and BRCA2 mutation carriers: a meta-analysis. Crit Rev Oncol Hematol 2018; 132:111–115. doi:10.1016/j.critrevonc.2018.09.018
- Kotsopoulos J, Huzarski T, Gronwald J, et al. Hormone replacement therapy after menopause and risk of breast cancer in BRCA1 mutation carriers: a case-control study. Breast Cancer Res Treat 2016; 155(2):365–373. doi:10.1007/s10549-016-3685-3
- Rebbeck TR, Friebel T, Wagner T, et al; PROSE Study Group. Effect of short-term hormone replacement therapy on breast cancer risk reduction after bilateral prophylactic oophorectomy in BRCA1 and BRCA2 mutation carriers: the PROSE Study Group. J Clin Oncol 2005; 23(31):7804–7810. doi:10.1200/JCO.2004.00.8151
- Johansen N, Liavaag AH, Iversen OE, Dørum A, Braaten T, Michelsen TM. Use of hormone replacement therapy after risk-reducing salpingo-oophorectomy. Acta Obstet Gynecol Scand 2017; 96(5):547–555. doi:10.1111/aogs.13120
- Mørch LS, Skovlund CW, Hannaford PC, Iversen L, Fielding S, Lidegaard Ø. Contemporary hormonal contraception and the risk of breast cancer. N Engl J Med 2017; 377(23):2228–2239. doi:10.1056/NEJMoa1700732
- Batur P, Sikka S, McNamara M. Contraception update: extended use of long acting methods, hormonal contraception risks, and over the counter access. J Womens Health (Larchmt) 2018. doi:10.1089/jwh.2018.7391. [Epub ahead of print]
- Michels KA, Pfeiffer RM, Brinton LA, Trabert B. Modification of the associations between duration of oral contraceptive use and ovarian, endometrial, breast, and colorectal cancers. JAMA Oncol 2018; 4(4):516–521. doi:10.1001/jamaoncol.2017.4942
- Iversen L, Fielding S, Lidegaard Ø, Mørch LS, Skovlund CW, Hannaford PC. Association between contemporary hormonal contraception and ovarian cancer in women of reproductive age in Denmark: prospective, nationwide cohort study. BMJ 2018; 362:k3609. doi:10.1136/bmj.k3609
- MacDorman MF, Declercq E, Cabral H, Morton C. Recent increases in the US maternal mortality rate: disentangling trends from measurement issues. Obstet Gynecol 2016; 128(3):447–455. doi:10.1097/AOG.0000000000001556
- Pattacini P, Nitrosi A, Giorgi Rossi P, et al; RETomo Working Group. Digital mammography versus digital mammography plus tomosynthesis for breast cancer screening: the Reggio Emilia tomosynthesis randomized trial. Radiology 2018; 288(2):375–385. doi:10.1148/radiol.2018172119
- Pace L, Keating NL. A systematic assessment of benefits and risks to guide breast cancer screening decisions. JAMA 2014; 311(13):1327–1335. doi:10.1001/jama.2014.1398
- Friedewald SM, Rafferty EA, Rose SL, et al. Breast cancer screening using tomosynthesis in combination with digital mammography. JAMA 2014; 311(24):2499–2507. doi:10.1001/jama.2014.6095
- Skaane P, Bandos AI, Gullien R, et al. Comparison of digital mammography alone and digital mammography plus tomosynthesis in a population-based screening program. Radiology 2013; 267(1):47–56. doi:10.1148/radiol.12121373
- Ciatto S, Houssami N, Bernardi D, et al. Integration of 3D digital mammography with tomosynthesis for population breast-cancer screening (STORM): a prospective comparison study. Lancet Oncol 2013; 14(7):583–589. doi:10.1016/S1470-2045(13)70134-7
- US Preventive Services Task Force. Final recommendation statement: breast cancer: screening. www.uspreventiveservicestaskforce.org/Page/Document/RecommendationStatementFinal/breast-cancer-screening1. Accessed May 13, 2019.
- American College of Obstetricians and Gynecologists. Breast cancer risk assessment and screening in average-risk women. www.acog.org/Clinical-Guidance-and-Publications/Practice-Bulletins/Committee-on-Practice-Bulletins-Gynecology/Breast-Cancer-Risk-Assessment-and-Screening-in-Average-Risk-Women?IsMobileSet=false#5. Accessed May 13, 2019.
KEY POINTS
- The US Preventive Services Task Force recommends screening bone density when the 10-year risk of major osteoporotic fracture is more than 8.4%.
- Women can be reassured that nonrifamycin antibiotics are unlikely to reduce efficacy of hormonal contraception.
- Hormone replacement therapy after prophylactic bilateral salpingo-oophorectomy does not increase breast cancer risk in women who carry the BRCA1 gene mutation.
- Hormonal contraception may increase the risk of breast cancer by 1 extra case per 7,690 women, although most studies suggest there is no increased risk.
- The use of digital breast tomosynthesis along with digital mammography can increase cancer detection in women with dense breast tissue, but it is not yet routinely recommended by most professional societies.
Disseminated invasive aspergillosis in an immunocompetent patient
A 57-year-old woman was admitted to our hospital for progressive hypoxic respiratory failure that developed after 10 days of empiric treatment at another hospital for an exacerbation of chronic obstructive pulmonary disease (COPD).
Computed tomography (CT) showed a lesion in the upper lobe of the left lung, with new ground-glass opacities with cystic and cavitary changes, raising concern for an inflammatory or infectious cause (Figure 1). Respiratory culture of expectorated secretions grew Aspergillus. Assays for beta-d-glucan and serum Aspergillus immunoglobulin G (IgG) antibodies were positive, but given the improvement in her oxygenation requirements and overall clinical status, these findings were initially considered clinically insignificant. Tests for immunoglobulin deficiencies and human immunodeficiency virus were negative, ruling out primary immunodeficiency. However, within the next 48 hours, her respiratory status declined, and voriconazole was started out of concern for invasive pulmonary aspergillosis based on the serum IgG results.
Despite 2 days of treatment with voriconazole, the patient developed respiratory failure. Repeat CT showed that the ground-glass opacities were more dense, especially in the lower lobes, and new patchy infiltrates were noted in the left lung. The patient developed a right tension pneumothorax requiring emergency intubation and chest tube insertion.1 She subsequently developed acute abdominal pain with worsening abdominal distention, diagnosed as pneumoperitoneum. Emergency exploratory laparotomy revealed perforations in the cecum with fecal spillage, requiring ileocecectomy and ileostomy.
Pathologic study of bowel specimens confirmed fungal hyphae with “tree-branch” structures consistent with fungal infection in the bowel (Figure 2).
Oral voriconazole was continued. The patient’s respiratory status improved, and she no longer required supplemental oxygen. She was discharged on a regimen of oral voriconazole 200 mg twice daily. However, over the next 12 months, she had additional hospitalizations for severe sepsis from abdominal wound infections, pneumonia, and Clostridium difficile infection. She will require lifelong antifungal treatment.
INVASIVE PULMONARY ASPERGILLOSIS
Invasive pulmonary aspergillosis is the most severe form of aspergillosis and is seen most often in immunocompromised patients. The death rate is as high as 50% in neutropenic patients, even with prompt diagnosis and effective treatment.2 The infection becomes life-threatening when it enters the bloodstream, leading to thrombus formation and precipitating embolism and necrosis in the lungs.3
In immunocompetent patients, COPD, tuberculosis, bronchiectasis, liver disease, severe sepsis, and diabetes mellitus predispose to invasive pulmonary aspergillosis.2 Other risk factors include long-term steroid therapy at doses equivalent to prednisone 20 mg/day for at least 13 weeks4 and viral infection such as influenza.5 Chronic use of inhaled corticosteroids has been hypothesized to increase risk.4
Histopathologic confirmation of fungal elements is the gold standard for diagnosis.3 New biomarkers such as beta-d-glucan have shown promise in enabling earlier diagnosis to allow effective treatment of disseminated aspergillosis, as in our patient.6
TAKE-HOME MESSAGE
Although not common, invasive aspergillosis can occur in immunocompetent and near-immunocompetent patients, particularly those with COPD or other underlying lung disease.
Acknowledgment: The authors thank Kimberley Woodward, MD, Inova Fairfax Hospital, Falls Church, VA, for her study of the bowel specimen and for providing the histology slide.
- Vukicevic TA, Dudvarski-Ilic A, Zugic V, Stevanovic G, Rubino S, Barac A. Subacute invasive pulmonary aspergillosis as a rare cause of pneumothorax in immunocompetent patient: brief report. Infection 2017; 45(3):377–380. doi:10.1007/s15010-017-0994-3
- Moreno-González G, Ricart de Mesones A, Tazi-Mezalek R, Marron-Moya MT, Rosell A, Mañez R. Invasive pulmonary aspergillosis with disseminated infection in immunocompetent patient. Can Respir J 2016; 2016:7984032. doi:10.1155/2016/7984032
- Chen L, Liu Y, Wang W, Liu K. Adrenal and hepatic aspergillosis in an immunocompetent patient. Infect Dis (Lond) 2015; 47(6):428–432. doi:10.3109/00365548.2014.995697
- Taccone FS, Van den Abeele AM, Bulpa P, et al; AspICU Study Investigators. Epidemiology of invasive aspergillosis in critically ill patients: clinical presentation, underlying conditions, and outcomes. Crit Care 2015; 19:7. doi:10.1186/s13054-014-0722-7
- Crum-Cianflone NF. Invasive aspergillosis associated with severe influenza infections. Open Forum Infect Dis 2016; 3(3):ofw171. doi:10.1093/ofid/ofw171
- Ergene U, Akcali Z, Ozbalci D, Nese N, Senol S. Disseminated aspergillosis due to Aspergillus niger in immunocompetent patient: a case report. Case Rep Infect Dis 2013; 2013:385190. doi:10.1155/2013/385190
A 57-year-old woman was admitted to our hospital for progressive hypoxic respiratory failure that developed after 10 days of empiric treatment at another hospital for an exacerbation of chronic obstructive pulmonary disease (COPD).
Computed tomography (CT) showed a lesion in the upper lobe of the left lung, with new ground-glass opacities with cystic and cavitary changes raising concern for an inflammatory or infectious cause (Figure 1). Respiratory culture of expectorated secretions grew Aspergillus. Assays for beta-d-glucan and serum Aspergillus immunoglobulin G (IgG) antibodies were positive, although given the improvement in her oxygenation requirements and overall clinical status, these were thought to be trivial. Tests for immunoglobulin deficiencies and human immunodeficiency virus were negative, ruling out primary immunodeficiency. However, within the next 48 hours, her respiratory status declined, and voriconazole was started out of concern for invasive pulmonary aspergillosis based on results of serum IgG testing.
Despite 2 days of treatment with voriconazole, the patient developed respiratory failure. Repeat CT showed that the ground-glass opacities were more dense, especially in the lower lobes, and new patchy infiltrates were noted in the left lung. The patient developed a right tension pneumothorax requiring emergency intubation and chest tube insertion.1 She subsequently developed acute abdominal pain with worsening abdominal distention, diagnosed as pneumoperitoneum. Emergency exploratory laparotomy revealed perforations in the cecum with fecal spillage, requiring ileocecectomy and ileostomy.
Pathologic study of bowel specimens confirmed fungal hyphae with “tree-branch” structures consistent with fungal infection in the bowel (Figure 2).
Oral voriconazole was continued. The patient’s respiratory status improved, and she no longer required supplemental oxygen. She was discharged on a regimen of oral voriconazole 200 mg twice daily. However, over the next 12 months, she had additional hospitalizations for severe sepsis from abdominal wound infections, pneumonia, and Clostridium difficile infection. She will require lifelong antifungal treatment.
INVASIVE PULMONARY ASPERGILLOSIS
Invasive pulmonary aspergillosis is the most severe form of aspergillosis and is most often seen in immunocompromised patients. The death rate is as high as 50% in neutropenic patients, even with timely diagnosis and effective treatment.2 It becomes life-threatening as the infection enters the bloodstream, leading to formation of thrombi and precipitating embolism and necrosis in the lungs.3
In immunocompetent patients, COPD, tuberculosis, bronchiectasis, liver disease, severe sepsis, and diabetes mellitus predispose to invasive pulmonary aspergillosis.2 Other risk factors include long-term steroid therapy at doses equivalent to prednisone 20 mg/day for at least 3 weeks4 and viral infection such as influenza.5 Chronic use of inhaled corticosteroids has been hypothesized to increase risk.4
Histopathologic confirmation of fungal elements is the gold standard for diagnosis.3 New biomarkers such as beta-d-glucan have shown promise in enabling earlier diagnosis to allow effective treatment of disseminated aspergillosis, as in our patient.6
TAKE-HOME MESSAGE
Although not common, invasive aspergillosis can occur in immunocompetent and near-immunocompetent patients, particularly those with COPD or other underlying lung disease.
Acknowledgment: The authors thank Kimberley Woodward, MD, Inova Fairfax Hospital, Falls Church, VA, for her study of the bowel specimen and for providing the histology slide.
- Vukicevic TA, Dudvarski-Ilic A, Zugic V, Stevanovic G, Rubino S, Barac A. Subacute invasive pulmonary aspergillosis as a rare cause of pneumothorax in immunocompetent patient: brief report. Infection 2017; 45(3):377–380. doi:10.1007/s15010-017-0994-3
- Moreno-González G, Ricart de Mesones A, Tazi-Mezalek R, Marron-Moya MT, Rosell A, Mañez R. Invasive pulmonary aspergillosis with disseminated infection in immunocompetent patient. Can Respir J 2016; 2016:7984032. doi:10.1155/2016/7984032
- Chen L, Liu Y, Wang W, Liu K. Adrenal and hepatic aspergillosis in an immunocompetent patient. Infect Dis (Lond) 2015; 47(6):428–432. doi:10.3109/00365548.2014.995697
- Taccone FS, Van den Abeele AM, Bulpa P, et al; AspICU Study Investigators. Epidemiology of invasive aspergillosis in critically ill patients: clinical presentation, underlying conditions, and outcomes. Crit Care 2015; 19:7. doi:10.1186/s13054-014-0722-7
- Crum-Cianflone NF. Invasive aspergillosis associated with severe influenza infections. Open Forum Infect Dis 2016; 3(3):ofw171. doi:10.1093/ofid/ofw171
- Ergene U, Akcali Z, Ozbalci D, Nese N, Senol S. Disseminated aspergillosis due to Aspergillus niger in immunocompetent patient: a case report. Case Rep Infect Dis 2013; 2013:385190. doi:10.1155/2013/385190
Cannabis: Doctors tell FDA to get out of the weeds
The Food and Drug Administration held its first-ever public hearing about products containing cannabis or cannabis-derived compounds – and it got an earful.
Hearing from over 100 individuals, the all-staff FDA panel was asked repeatedly to take the lead in bringing order to a confused morass of state and local cannabis regulations. The regulatory landscape currently contains many voids that slow research and put consumers at risk, many witnesses testified.
The federal Farm Bill of 2018 legalized the cultivation of hemp – cannabis with very low Δ9-tetrahydrocannabinol (THC) content – with regulatory restrictions.
However, the Farm Bill did not legalize low-THC cannabis products, said FDA Acting Commissioner Norman Sharpless, MD. The agency has concluded that both THC and cannabidiol (CBD) are drugs – not dietary supplements – and any exception to these provisions “would be new terrain for the FDA,” he said.
And although restrictions on CBD sales have generally not been enforced, “under current law, CBD and THC cannot lawfully be added to a food or marketed as a dietary supplement,” said Dr. Sharpless.
Though the FDA could choose to carve out regulatory exceptions, it has not yet done so.
Stakeholders who gave testimony included not just physicians, scientists, consumers, and advocates, but also growers, manufacturers, distributors, and retailers – as well as the legal firms that represent these interests.
Broadly, physicians and scientists encouraged the FDA to move forward with classifying CBD and most CBD-containing products as drugs, rather than dietary supplements. In general, the opposite approach was promoted by agriculture and manufacturing representatives who testified.
However, all were united in asking the FDA for clarity – and alacrity.
Again and again, speakers asked the FDA to move posthaste in tidying up the current clutter of regulations. Ryan Vandrey, PhD, of Johns Hopkins University, Baltimore, explained that today, “Hemp-derived CBD is unscheduled, Epidiolex is Schedule V, and synthetic CBD is Schedule I in the DEA’s current framework.”
Kevin Chapman, MD, of Children’s Hospital Colorado, representing the American Epilepsy Society, called for regulation of CBD as a drug, and an accelerated clinical trial path. He noted that many families of children with epilepsy other than Lennox-Gastaut or Dravet syndrome (the only approved Epidiolex indications) are dosing other CBD products, “making it up as they go along."
Jacqueline French, MD, chief scientific officer of the Epilepsy Foundation, agreed that many families of children with epilepsy are doing their best to find consistent and unadulterated CBD products. She said her worry is that “abrupt withdrawal of CBD from the market could lead to seizure worsening, injury, or even death for patients who now rely on non-pharmaceutical CBD for seizure control.”
Dr. French and Dr. Chapman each made the point that without insurance reimbursement, Epidiolex costs families over $30,000 annually, while CBD is a small fraction of that – as little as $1,000, said Dr. Chapman.
The lack of standards for CBD preparations means the content of oils, tinctures, and e-cigarette products can vary widely. Michelle Peace, PhD, a forensic scientist at Virginia Commonwealth University, Richmond, receives funding from the U.S. Department of Justice to study licit and illicit drug use with e-cigarettes. She and her colleagues have found dextromethorphan and the potent synthetic cannabinoid 5F-ADB in vaping supplies advertised as containing only CBD.
In a recent investigation prompted by multiple consumer reports of hallucinations after vaping CBD-labeled products, “17 of 18 samples contained a synthetic cannabinoid. Clinics will not find these kinds of drugs when they just do drug testing,” Dr. Peace said.
Another sampling of CBD products available from retail outlets showed that claims often bore little relation to cannabinoid content, said Bill Gurley, PhD, co-director of the Center for Dietary Supplement Research at the University of Arkansas, Little Rock. While several products that claimed to have CBD actually contained none at all, some contained far more CBD than the labeled amount – 228 times more, in one instance. One tested product was actually 45% THC, with little CBD content.
The potential for CBD to interact with many other drugs is cause for concern, noted several presenters. Barry Gidal, PharmD, director of pharmacy practice at the University of Wisconsin, Madison, said that “CBD is a complicated molecule. It has a complicated biotransformation pathway” through at least 9 enzymes within the hepatic cytochrome P450 system.
“It wasn’t until the Epidiolex development program that we began to understand the clinical ramifications of this…. The effect of CBD on other drugs may go beyond the anti-seizure drugs that have been studied so far,” said Dr. Gidal. He pointed to recent published case reports of potential CBD-drug interactions reporting elevated international normalized ratios for a patient on warfarin using CBD, and another report of elevated tacrolimus levels in a patient using CBD.
The way forward
A variety of regulatory pathways were proposed at the hearing. To prevent adulteration and contamination, many advocated standardized good manufacturing practices (GMPs), product reporting and identification or registration, and a centralized registry for reporting adverse events.
Patient advocates, physicians, and scientists called for an easing of access to cannabis for medical research. Currently, cannabis, still classified as a Schedule I substance by the Drug Enforcement Administration, is only legally available for this purpose through a supply maintained by the National Institute on Drug Abuse. A limited number of strains are available, and access requires a lengthy approval process.
Most discussion centered around CBD, though some presenters asked for smoother sailing for THC research as well, particularly as a potential adjunct or alternative to opioids for chronic pain. Cannabidiol has generally been recognized as non-psychoactive, and the FDA assigned it a very low probability of causing dependence or addiction problems in its own review of human data.
However, in his opening remarks, Dr. Sharpless warned that this fact does not make CBD a benign substance, and many questions remain unanswered.
“How much CBD is safe to consume in a day? What if someone applies a topical CBD lotion, consumes a CBD beverage or candy, and also consumes some CBD oil? How much is too much? How will it interact with other drugs the person might be taking? What if she’s pregnant? What if children access CBD products like gummy edibles? What happens when someone chronically uses CBD for prolonged periods? These and many other questions represent important and significant gaps in our knowledge,” said Dr. Sharpless.
The FDA has established a public docket where the public may submit comments and documents until July 2, 2019.
FROM AN FDA PUBLIC HEARING
Studies cast doubt on FDA’s accelerated cancer drug pathway
In the first study, lead investigator Emerson Y. Chen, MD, of Oregon Health & Science University, Portland, and colleagues conducted a retrospective analysis of all drugs approved by the FDA on the basis of response rate – the percentage of patients who experience tumor shrinkage – from Jan. 1, 2006, to Sept. 30, 2018. The data set consisted of 59 oncology drugs with 85 unique indications approved by the FDA for advanced-stage metastatic cancer on the basis of a response rate (RR) endpoint during the study period.
Of the 85 indications, 32 were granted regular approval immediately with limited postmarketing efficacy requirements and 53 (62%) were granted accelerated approval. Of the accelerated approvals, 29 (55%) were later converted to regular approval.
The median RR for the 85 indications was 41%, and the median sample size of such RR trials was 117 patients, according to the analysis published in JAMA Internal Medicine.
Among all approvals, 14 of 85 (16%) had an RR less than 20%, 28 of 85 (33%) had an RR less than 30%, and 40 of 85 (47%) had an RR less than 40%.
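Those proportions follow directly from the reported counts; a minimal sketch, using only the numbers stated in the analysis, reproduces the percentages:

```python
# Counts reported in the Chen et al. analysis: of 85 approved
# indications, how many had a response rate (RR) below each cutoff.
TOTAL_INDICATIONS = 85
below_cutoff = {"RR < 20%": 14, "RR < 30%": 28, "RR < 40%": 40}

# Each share is the count over the 85 indications, rounded to the
# nearest whole percent as reported in the article.
shares = {label: round(100 * n / TOTAL_INDICATIONS)
          for label, n in below_cutoff.items()}
print(shares)  # {'RR < 20%': 16, 'RR < 30%': 33, 'RR < 40%': 47}
```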
Most approved drugs had an RR ranging from 20% to 59%, the study found. Of 81 available indications, the median complete response rate – defined as the percentage of patients with no visible disease and normalization of lymph nodes – was 6%. (Complete response data were not reported for four drug indications.)
The investigators found that many of the drugs studied have remained on the market for years without subsequent confirmatory data. For example, of the 29 accelerated approvals based on RR that were converted to full approval, 23 conversions were made on the basis of surrogate endpoints (progression-free survival or RR, including 7 based on RR alone), and just 6 of 29 were made on the basis of overall survival (OS).
The findings suggest that most cancer drugs approved by the FDA based on RR have less than transformational response rates, and that such indications do not have confirmed clinical benefit, the study authors wrote.
While in some settings a response can carry prognostic value for overall survival, the authors wrote that “the ability of RR to serve as a validated surrogate for OS varies among cancer types and is generally poor.”
In the second study, researchers found that confirmatory trials for only one-fifth of cancer drug indications approved via the FDA’s accelerated approval route demonstrated improvements in overall patient survival.
Lead investigator Bishal Gyawali, MD, PhD, of Queen’s University, Kingston, Ont., and colleagues examined FDA data on recent drugs and indications that received accelerated approval and were later granted full approval.
For their analysis, the investigators reviewed the FDA’s database of postmarketing requirements and commitments, as well as PubMed, to determine the current status of postmarket trials for indications labeled as “ongoing” in the original FDA data.
Of 93 cancer drug indications for which accelerated approval was granted from Dec. 11, 1992, to May 31, 2017, the FDA reported clinical benefit was adequately confirmed in 51 indications. Of these confirmations, 15 demonstrated improvement in overall survival.
In their updated analysis, the investigators determined that confirmatory trials for 19 of the 93 (20%) cancer drug approvals reported an improvement in OS, 19 trials (20%) reported improvement in the same surrogate used in the preapproval trial, and 20 trials (21%) reported improvement in a different surrogate, according to the study, also published in JAMA Internal Medicine.
Additionally, results showed that 5 confirmatory trials were delayed, 10 trials were pending, and 9 trials were ongoing.
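Tallied together, the reported categories account for 82 of the 93 accelerated approvals; a small sketch, assuming only the counts given above, makes the partition explicit:

```python
# Confirmatory-trial status for the 93 cancer indications granted
# accelerated approval, as reported in the Gyawali et al. analysis.
TOTAL_APPROVALS = 93
status_counts = {
    "improved overall survival": 19,
    "improved same surrogate as preapproval trial": 19,
    "improved a different surrogate": 20,
    "delayed": 5,
    "pending": 10,
    "ongoing": 9,
}

# Percentages truncated to whole numbers, matching the article's
# figures (19/93 -> 20%, 20/93 -> 21%).
for label, n in status_counts.items():
    print(f"{label}: {n}/{TOTAL_APPROVALS} ({100 * n // TOTAL_APPROVALS}%)")

# The listed categories cover 82 of 93; the article does not break
# down the remaining 11 indications.
print(sum(status_counts.values()))  # 82
```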
For three recent accelerated approvals, the primary endpoints were not met in the confirmatory trials, but one of the indications still received full approval.
The findings raise several concerns about the accelerated cancer drug pathway, including whether the same surrogate efficacy measure should be used as verification of drug benefit, according to the investigators. Conversely, using a different surrogate endpoint than the original measure can cause confusion among physicians and patients about whether the cancer drug improves survival or quality of life, information that is essential in the benefit-risk evaluation for clinical decision making.
That a number of the confirmatory trials examined were delayed or pending emphasizes the considerable time that can elapse between drug approval and confirmatory trial completion, they added.
“Timely planning and completion of postmarketing trials is necessary for proper implementation of the accelerated approval pathway, and the FDA should minimize the period during which patients and physicians are using drugs approved through accelerated pathways without rigorous data on their ultimate clinical benefit,” the authors wrote in the analysis.
Dr. Chen, lead author of the RR study, said both studies call into question what criteria are optimal for assessing cancer drug value, while ensuring such thresholds are neither too high to achieve – which would keep useful drugs from reaching the market – nor too low – which would allow drugs with marginal benefit onto the market.
“There has been tremendous drug development within the oncology space, and it is always important to look back to reassess and see if the process [matches] the original vision so that we can correct any misuse or concerns,” Dr. Chen said in an interview.
Dr. Chen said his study indicates the RR endpoint has been misused in scenarios with low response rate, common cancer, and/or situations with already available therapies. In the study by Dr. Gyawali, the results suggest many drugs approved on the basis of a surrogate endpoint (RR or progression-free survival) ultimately do not demonstrate survival benefit confirmation or patient-reported benefit, Dr. Chen said.
“We hope that readers of these JAMA IM studies and the accompanying commentaries will recognize that there could be a set of guidance criteria from regulatory agencies or oncology organizations to recommend use of surrogate endpoints in special situations: high response rate of the drug, very rare cancer, or highly innovative therapy not yet seen before,” he said. “The use of surrogate endpoints to justify these therapies must also have postmarketing confirmation of survival or patient-reported benefit.”
The study led by Dr. Chen was supported by the Laura and John Arnold Foundation. Dr Chen reported receiving lecture honorarium from Horizon CME; another coauthor reported receiving honorarium from universities, medical centers, and publishers. The study led by Dr. Gyawali was supported by the Arnold Ventures; one of the coauthors reported receiving grant support from the Harvard-MIT Center for Regulatory Science and the Engelberg Foundation, as well as unrelated research funding from the FDA.
SOURCES: Chen EY et al. JAMA Intern Med. 2019 May 28. doi: 10.1001/jamainternmed.2019.0583; Gyawali B et al. JAMA Intern Med. 2019 May 28. doi: 10.1001/jamainternmed.2019.0462.
In the first study, lead investigator Emerson Y. Chen, MD, of Oregon Health & Science University, Portland, and colleagues conducted a retrospective analysis of all drugs approved by the FDA on the basis of response rate – the percentage of patients who experience tumor shrinkage – from Jan. 1, 2006, to Sept. 30, 2018. The data set consisted of 59 oncology drugs with 85 unique indications approved by the FDA for advanced-stage metastatic cancer on the basis of a response rate (RR) endpoint during the study period.
Of the 85 indications, 32 were granted regular approval immediately with limited postmarketing efficacy requirements and 53 (62%) were granted accelerated approval. Of the accelerated approvals, 29 (55%) were later converted to regular approval.
The median RR for the 85 indications was 41%, and the median sample size of such RR trials was 117 patients, according to the analysis published in JAMA Internal Medicine.
Among all approvals, 14 of 85 (16%) had an RR less than 20%, 28 of 85 (33%) had an RR less than 30%, and 40 of 85 (47%) had an RR less than 40%.
Most approved drugs had an RR ranging from 20% to 59%, the study found. Of 81 available indications, the median complete response rate – defined as the percentage of patients with no visible disease and normalization of lymph nodes – was 6%. (Complete response data were not reported for four drug indications.)
The investigators found that many of the drugs studied have remained on the market for years without subsequent confirmatory data. For example, when the accelerated approvals based on RR were converted to full approval, 23 of 29 were made on the basis of surrogate endpoints (progression-free survival or RR), 7 of 29 were made on the basis of RR, and just 6 of 29 were made on the basis of overall survival (OS).
The findings suggest that most cancer drugs approved by the FDA based on RR have less than transformational response rates, and that such indications do not have confirmed clinical benefit, the study authors wrote.
While in some settings, a response can equal prognostic value regarding overall survival, the authors wrote that “the ability of RR to serve as a validated surrogate for OS varies among cancer types and is generally poor.”
In the second study, researchers found that confirmatory trials for only one-fifth of cancer drug indications approved via the FDA’s accelerated approval route demonstrated improvements in overall patient survival.
Lead investigator Bishal Gyawali, MD, PhD, of Queen’s University, Kingston, Ont., and colleagues examined FDA data on recent drugs and indications that received accelerated approval and were later granted full approval.
For their analysis, the investigators reviewed the FDA’s database of postmarketing requirements and commitments, as well as PubMed, to determine the current status of postmarket trials for indications labeled as “ongoing” in the original FDA data.
Of 93 cancer drug indications for which accelerated approval was granted from Dec. 11, 1992, to May 31, 2017, the FDA reported clinical benefit was adequately confirmed in 51 indications. Of these confirmations, 15 demonstrated improvement in overall survival.
In their updated analysis, the investigators determined that confirmatory trials for 19 of the 93 (20%) cancer drug approvals reported an improvement in OS, 19 trials (20%) reported improvement in the same surrogate used in the preapproval trial, and 20 trials (21%) reported improvement in a different surrogate, according to the study, also published in JAMA Internal Medicine.
Additionally, results showed that 5 confirmatory trials were delayed, 10 trials were pending, and 9 trials were ongoing.
For three recent accelerated approvals, the primary endpoints were not met in the confirmatory trials, but one of the indications still received full approval.
The findings raise several concerns about the accelerated cancer drug pathway, including whether the same surrogate efficacy measure should be used as verification of drug benefit, according to the investigators. Conversely, using a different surrogate endpoint than the original measure can cause confusion among physicians and patients about whether the cancer drug improves survival or quality of life, information that is essential in the benefit-risk evaluation for clinical decision making.
That a number of the confirmatory trials examined were delayed or pending emphasizes the considerable time that can elapse between drug approval and confirmatory trial completion, they added.
“Timely planning and completion of postmarketing trials is necessary for proper implementation of the accelerated approval pathway, and the FDA should minimize the period during which patients and physicians are using drugs approved through accelerated pathways without rigorous data on their ultimate clinical benefit,” the authors wrote in the analysis.
Dr. Chen, lead author of the RR study, said both studies raise the question of what criteria are optimal for assessing cancer drug value, with a bar that is neither so high that useful drugs are kept off the market nor so low that drugs with marginal benefit reach it.
“There has been tremendous drug development within the oncology space, and it is always important to look back to reassess and see if the process [matches] the original vision so that we can correct any misuse or concerns,” Dr. Chen said in an interview.
Dr. Chen said his study indicates the RR endpoint has been misused in scenarios with low response rate, common cancer, and/or situations with already available therapies. In the study by Dr. Gyawali, the results suggest many drugs approved on the basis of a surrogate endpoint (RR or progression-free survival) ultimately do not demonstrate survival benefit confirmation or patient-reported benefit, Dr. Chen said.
“We hope that readers of these JAMA IM studies and the accompanying commentaries will recognize that there could be a set of guidance criteria from regulatory agencies or oncology organizations to recommend use of surrogate endpoints in special situations: high response rate of the drug, very rare cancer, or highly innovative therapy not yet seen before,” he said. “The use of surrogate endpoints to justify these therapies must also have postmarketing confirmation of survival or patient-reported benefit.”
The study led by Dr. Chen was supported by the Laura and John Arnold Foundation. Dr. Chen reported receiving a lecture honorarium from Horizon CME; another coauthor reported receiving honoraria from universities, medical centers, and publishers. The study led by Dr. Gyawali was supported by Arnold Ventures; one of the coauthors reported receiving grant support from the Harvard-MIT Center for Regulatory Science and the Engelberg Foundation, as well as unrelated research funding from the FDA.
SOURCES: Chen EY et al. JAMA Intern Med. 2019 May 28. doi: 10.1001/jamainternmed.2019.0583; Gyawali B et al. JAMA Intern Med. 2019 May 28. doi: 10.1001/jamainternmed.2019.0462.
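The percentages above can be recomputed directly from the counts quoted in the article. A minimal check, using only the figures reported in the text (not the underlying data set):

```python
# Recompute the reported response-rate (RR) shares from Chen et al.
# Counts are taken from the article text: 85 indications in total.
total = 85
thresholds = {"<20%": 14, "<30%": 28, "<40%": 40}

for label, count in thresholds.items():
    share = count / total
    # Matches the 16%, 33%, and 47% figures quoted above.
    print(f"RR {label}: {count}/{total} = {share:.0%}")
```

The cumulative counts (14, 28, 40) are consistent: each threshold includes the indications below the previous one.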
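Tallying the confirmatory-trial categories reported for the 93 indications shows how the accounting fits together. A sketch using only the counts quoted in the article; the indications outside both groups are not itemized in the text:

```python
# Tally the confirmatory-trial outcomes from the Gyawali et al. analysis.
# All counts come from the article text; 93 accelerated-approval indications.
total = 93

improved = {"overall survival": 19, "same surrogate": 19, "different surrogate": 20}
unresolved = {"delayed": 5, "pending": 10, "ongoing": 9}

reported_improvement = sum(improved.values())          # 58 indications
no_result_yet = sum(unresolved.values())               # 24 indications
remainder = total - reported_improvement - no_result_yet  # not detailed in the article

print(f"Improvement on some endpoint: {reported_improvement}/{total} "
      f"({reported_improvement / total:.0%})")
print(f"Delayed, pending, or ongoing: {no_result_yet}/{total}")
print(f"Remaining indications: {remainder}/{total}")
```

Only 19 of 93 (20%) showed an OS benefit, the figure the authors highlight; the rest confirmed benefit on surrogates or remain unresolved.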
FROM JAMA INTERNAL MEDICINE
FDA grants Priority Review to Vascepa for cardiovascular risk reduction
The Food and Drug Administration has granted a Priority Review to the supplemental new drug application for icosapent ethyl (Vascepa).
If approved, Vascepa – which is produced by Amarin – would be the first drug indicated to reduce residual cardiovascular risk in statin-treated patients whose LDL cholesterol is managed but whose triglycerides remain persistently elevated. The drug is currently approved for reducing triglyceride levels in patients with baseline values of 500 mg/dL or greater.
The Priority Review is based on results of REDUCE-IT, a landmark cardiovascular outcomes trial whose primary results were presented at the American Heart Association scientific sessions last November and published in the New England Journal of Medicine. Vascepa met the primary study endpoint, significantly reducing the relative risk of a first major adverse cardiovascular event by 25%.
The drug also met the study’s key secondary endpoint, reducing the incidence of a composite of cardiovascular death, nonfatal heart attack, and nonfatal stroke by 26%. Significant adverse events associated with Vascepa in the trial were peripheral edema, constipation, and atrial fibrillation.
Vascepa is currently indicated as an adjunct to diet to reduce triglyceride levels in adults with severe hypertriglyceridemia, a significantly smaller population than that represented in REDUCE-IT.
“We expect earlier approval of an expanded indication for Vascepa to lead to faster improvements in care for millions of patients with residual cardiovascular risk after statin therapy,” John F. Thero, president and CEO of Amarin, said in the statement.
The FDA is expected to issue its decision by the end of September. The full press release is available on the Amarin website.
Synthetic drugs pose regulatory, diagnostic challenges
SAN FRANCISCO – Designer drugs, especially synthetic opioids and cannabinoids, are presenting increasing challenges to psychiatrists treating patients with overdoses or psychiatric adverse effects. In 2017, synthetic opioids caused more than 28,000 deaths in the United States, more than any other drug class. Some of these drugs are technically legal because their modified chemical structures are not yet covered by statute, as legal definitions struggle to keep up with evolving street drugs, Vanessa Torres-Llenza, MD, assistant professor of psychiatry at George Washington University, Washington, said in an interview. Dr. Torres-Llenza moderated a session on synthetic opioids at the annual meeting of the American Psychiatric Association.
Of particular concern is the synthetic opioid fentanyl, which has a potency about 50 times that of heroin, and 100 times that of morphine. It is a legal pharmaceutical drug for use in severe pain, but it can be made illicitly, and it is frequently mixed with heroin or cocaine and put into counterfeit pills. The user often is not even aware of its presence. Another derivative, carfentanil, is even more dangerous. Used as a large-animal tranquilizer, and illegal for human use, carfentanil is about 100 times more potent than fentanyl.
These developments may require reconsideration of treatment using the opioid antagonist naloxone and similar drugs. The current guidance for naloxone is a 0.4- to 2-mg dose, followed by repeat dose at 2- to 3-minute intervals as needed. Considering the increasing presence of more potent drugs, “there may not be time to wait,” Dr. Torres-Llenza said.
Another concern is illicit manufacturing: By making even slight modifications to legal drugs, illegal operations can stay a step ahead of regulators because these derivatives are completely legal until legislation is passed to ban them. Estimates peg the number of such new derivatives at about 250 per year.
The recent history of the Food and Drug Administration’s regulation of synthetic opioids, presented during the session by Gowri Ramachandran, MD, a resident at George Washington University, illustrates the challenges. The Controlled Substances Act of 1970 assigns every regulated drug to one of five schedules based on medical use and potential for abuse and dependence. Schedule I substances are flagged for a high potential of abuse, no accepted medical use in the United States, and a lack of accepted safety data for use under medical supervision. Schedule II substances have accepted medical uses.
In 2012, the Synthetic Drug Abuse Prevention Act amended the earlier legislation, declaring that any chemical or related derivative with cannabimimetic properties, as well as some other hallucinogenic molecules and their close relatives, were included as schedule I controlled substances.
The amended legislation also extended the potential length of temporary schedule I status, from 1 year with a 6-month extension, to 2 years with a 1-year extension, to give regulators more time to catch up with both legal and illegal synthetic changes to determine if a drug should be schedule I or II.
A recent example of this problem is bath salts, far more powerful synthetic versions of a stimulant derived from the khat plant grown in East Africa and southern Arabia. Bath salts can produce hallucinogenic and euphoric effects similar to those of methamphetamine and ecstasy, yet they are readily available online and in retail stores, labeled “not for human use” and marketed as “bath salts,” “plant food,” “jewelry cleaner,” or “phone screen cleaner.”
Another concern is synthetic cannabinoids, which resemble the 100 or so cannabinoids found in marijuana, with tetrahydrocannabinol (THC) and cannabidiol (CBD) being the best-known examples. These began to appear in recreational use in 2005, marketed as legal forms of marijuana and sold under names like K2, Spice, and Kronic. They are sold in tobacco shops, again labeled “not for human consumption” and trumpeted instead as a “harmless incense blend” or “natural herbs.” The manufacture and content of these derivatives are completely unregulated, according to Dr. Ramachandran.
Like other drug classes, synthetic cannabinoids – many related to THC – have been structurally altered in recent years, posing challenges to regulation and even detection. This is especially concerning because a synthetic cannabinoid product could contain a potpourri of other drugs, such as opioids or herbs, leading to unpredictable effects. It’s also nearly impossible to identify everything in a patient’s system, Dr. Torres-Llenza said.
That makes diagnosis challenging, given that synthetic cannabinoids can cause a wide range of symptoms, most commonly violence, agitation, panic attacks, hallucinations, hyperglycemia, hyperkalemia, and tachycardia.
Synthetic cannabinoids usually do not contain CBD, which has some antipsychotic and anxiolytic effects. Instead, they are generally derived from THC, which is associated with psychosis, and are 40-660 times more potent than natural THC. This suggests that synthetic versions may pose a greater psychosis risk than natural cannabis. However, the association between synthetic cannabinoids and psychosis has been examined only in case reports, and it is difficult to distinguish a toxic syndrome from exacerbation of a previous prodromal syndrome or new-onset illness.
Acute reactions can occur within minutes of use and last 2-5 hours or more, but effects are highly unpredictable, as they depend on the specific mixture used.
In the emergency department, agitation, aggression, and impulsive behaviors may signal exposure to synthetic cannabinoids. Most patients can be treated in the ED with antipsychotics or benzodiazepines to manage symptoms. There could be regional toxidromes that arise from local distribution of specific synthetic cannabinoid combinations.
While testing for synthetic cannabinoids remains challenging, Quest Diagnostics has a urine-based panel that includes them, and the company says it is working with information from the National Forensic Laboratory Information System, the Drug Enforcement Administration, industry sources, and the scientific literature to periodically update its standard panel.
Dr. Torres-Llenza had no relevant financial disclosures.
SAN FRANCISCO – Designer drugs, especially synthetic opioids and cannabinoids, are presenting increasing challenges to psychiatrists treating patients with overdoses or psychiatric adverse effects. In 2017, synthetic opioids caused more than 28,000 deaths in the United States, more than any other type. Some of these drugs are technically legal, because their modified chemical structures aren’t covered as legal definitions struggle to keep up with street drug identities.
Vanessa Torres-Llenza, MD, assistant professor of psychiatry at George Washington University, Washington, said in an interview. Dr. Torres-Llenza moderated a session on synthetic opioids at the annual meeting of the American Psychiatric Association.
Of particular concern is the synthetic opioid fentanyl, which has a potency about 50 times that of heroin, and 100 times that of morphine. It is a legal pharmaceutical drug for use in severe pain, but it can be made illicitly, and it is frequently mixed with heroin or cocaine and put into counterfeit pills. The user often is not even aware of its presence. Another derivative, carfentanil, is even more dangerous. Used as a large-animal tranquilizer, and illegal for human use, carfentanil is about 100 times more potent than fentanyl.
These developments may require reconsideration of treatment using the opioid antagonist naloxone and similar drugs. The current guidance for naloxone is a 0.4- to 2-mg dose, followed by repeat dose at 2- to 3-minute intervals as needed. Considering the increasing presence of more potent drugs, “there may not be time to wait,” Dr. Torres-Llenza said.
Another concern is illicit manufacturing: By making even slight modifications to legal drugs, illegal operations can stay a step ahead of regulators because these derivatives are completely legal until legislation is passed to ban them. Estimates peg the number of such new derivatives at about 250 per year.
The recent history of the Food Drug Administration’s regulation of synthetic opioids, presented during the session by Gowri Ramachandran, MD, a resident at George Washington University, illustrates the challenges. The Controlled Substances Act of 1970 assigned every regulated drug into one of five classes based on medical use, and potential for abuse and dependence. Schedule I substances are flagged for a high potential of abuse, having no medical use in the United States, and a lack of accepted safety data for use under medical supervision. Schedule II substances have accepted medical uses.
In 2012, the Synthetic Drug Abuse Prevention Act amended the earlier legislation, declaring that any chemical or related derivative with cannabimimetic properties, as well as some other hallucinogenic molecules and their close relatives, were included as schedule I controlled substances.
The amended legislation also extended the potential length of temporary schedule I status, from 1 year with a 6-month extension, to 2 years with a 1-year extension, to give regulators more time to catch up with both legal and illegal synthetic changes to determine if a drug should be schedule I or II.
SAN FRANCISCO – Designer drugs, especially synthetic opioids and cannabinoids, are presenting increasing challenges to psychiatrists treating patients with overdoses or psychiatric adverse effects. In 2017, synthetic opioids caused more than 28,000 deaths in the United States, more than any other drug class. Some of these drugs are technically legal because their modified chemical structures fall outside existing legal definitions, which struggle to keep pace with the evolving street drug supply.
That assessment came from Vanessa Torres-Llenza, MD, assistant professor of psychiatry at George Washington University, Washington, who spoke in an interview. Dr. Torres-Llenza moderated a session on synthetic opioids at the annual meeting of the American Psychiatric Association.
Of particular concern is the synthetic opioid fentanyl, which has a potency about 50 times that of heroin, and 100 times that of morphine. It is a legal pharmaceutical drug for use in severe pain, but it can be made illicitly, and it is frequently mixed with heroin or cocaine and put into counterfeit pills. The user often is not even aware of its presence. Another derivative, carfentanil, is even more dangerous. Used as a large-animal tranquilizer, and illegal for human use, carfentanil is about 100 times more potent than fentanyl.
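The potency figures quoted above can be chained to put all three drugs on a common scale. As a back-of-the-envelope illustration only (these are the article's approximate multipliers, normalized to a morphine baseline for comparison; none of this is dosing guidance):

```python
# Chain the approximate relative-potency multipliers quoted in the article.
# These are illustrative order-of-magnitude figures, not clinical values.
fentanyl_vs_morphine = 100       # fentanyl is ~100x morphine
fentanyl_vs_heroin = 50          # fentanyl is ~50x heroin
carfentanil_vs_fentanyl = 100    # carfentanil is ~100x fentanyl

# Heroin relative to morphine: ~100 / 50 = ~2x
heroin_vs_morphine = fentanyl_vs_morphine / fentanyl_vs_heroin

# Carfentanil relative to morphine: ~100 * 100 = ~10,000x
carfentanil_vs_morphine = fentanyl_vs_morphine * carfentanil_vs_fentanyl

print(f"heroin ~ {heroin_vs_morphine:g}x morphine")
print(f"carfentanil ~ {carfentanil_vs_morphine:g}x morphine")
```

The chained numbers make clear why trace carfentanil contamination of a heroin supply is so dangerous: the implied potency gap between the two is several thousandfold.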
These developments may require reconsideration of treatment using the opioid antagonist naloxone and similar drugs. The current guidance for naloxone is a 0.4- to 2-mg dose, followed by repeat dose at 2- to 3-minute intervals as needed. Considering the increasing presence of more potent drugs, “there may not be time to wait,” Dr. Torres-Llenza said.
Another concern is illicit manufacturing: By making even slight modifications to legal drugs, illegal operations can stay a step ahead of regulators because these derivatives are completely legal until legislation is passed to ban them. Estimates peg the number of such new derivatives at about 250 per year.
The recent history of the Food and Drug Administration’s regulation of synthetic opioids, presented during the session by Gowri Ramachandran, MD, a resident at George Washington University, illustrates the challenges. The Controlled Substances Act of 1970 assigned every regulated drug to one of five schedules based on medical use, and potential for abuse and dependence. Schedule I substances are flagged for a high potential of abuse, no accepted medical use in the United States, and a lack of accepted safety data for use under medical supervision. Schedule II substances have accepted medical uses.
In 2012, the Synthetic Drug Abuse Prevention Act amended the earlier legislation, declaring that any chemical or related derivative with cannabimimetic properties, as well as some other hallucinogenic molecules and their close relatives, were included as schedule I controlled substances.
The amended legislation also extended the potential length of temporary schedule I status, from 1 year with a 6-month extension, to 2 years with a 1-year extension, to give regulators more time to catch up with both legal and illegal synthetic changes to determine if a drug should be schedule I or II.
A recent example of this problem is bath salts, which are far more powerful, synthetic versions of a stimulant derived from the khat plant that is grown in East Africa and southern Arabia. Bath salts can produce hallucinogenic and euphoric effects similar to methamphetamine and ecstasy, but they are readily available online and in retail stores, labeled as “not for human use” and marketed as “bath salts,” “plant food,” “jewelry cleaner,” or “phone screen cleaner.”
Another concern is synthetic cannabinoids, which resemble the 100 or so cannabinoids found in marijuana, with tetrahydrocannabinol (THC) and cannabidiol (CBD) being the most well-known examples. These began to appear in recreational use in 2005, marketed as legal forms of marijuana and sold under names like K2, Spice, and Kronic. They are sold in tobacco shops, again labeled “not for human consumption” and trumpeted instead as a “harmless incense blend” or “natural herbs.” Manufacture and content of these derivatives are completely unregulated, according to Dr. Ramachandran.
Like other drug classes, synthetic cannabinoids – many related to THC – have been structurally altered in recent years, posing challenges to regulation and even detection. This is especially concerning because a synthetic cannabinoid product could contain a potpourri of other drugs such as opioids or herbs, leading to unpredictable effects. It’s also nearly impossible to identify everything in a patient’s system, Dr. Torres-Llenza said.
That makes diagnosis challenging, given that synthetic cannabinoids can cause a wide range of symptoms, most commonly violence, agitation, panic attacks, hallucinations, hyperglycemia, hyperkalemia, and tachycardia.
Synthetic cannabinoids usually do not contain CBD, which has some antipsychotic and anxiolytic effects. Instead, they are generally derived from THC, which is associated with psychosis, and they are 40-660 times more potent than natural THC. This suggests that synthetic versions may pose a greater psychosis risk than natural cannabis. However, the association between synthetic cannabinoids and psychosis has been examined only in case reports, and it is difficult to distinguish a toxic syndrome from exacerbation of a previous prodromal syndrome or new-onset illness.
Acute reactions can occur within minutes of use and last 2-5 hours or more, but the course is highly unpredictable because it depends on the specific mixture used.
In the emergency department, agitation, aggression, and impulsive behaviors may signal exposure to synthetic cannabinoids. Most patients can be treated in the ED with antipsychotics or benzodiazepines to manage symptoms. There could be regional toxidromes that arise from local distribution of specific synthetic cannabinoid combinations.
While testing for synthetic cannabinoids remains challenging, Quest Diagnostics has a urine-based panel that includes them, and the company says it is working with information from the National Forensic Laboratory Information System, the Drug Enforcement Administration, industry sources, and the scientific literature to periodically update its standard panel.
Dr. Torres-Llenza had no relevant financial disclosures.
REPORTING FROM APA 2019