In reply: Short QT syndrome
In Reply: We thank Dr. Ratanapo and colleagues for their interest in our article. As we mentioned in our paper, and as they emphasized, the QT interval response to heart rate variation seems to be minimal. They wonder whether using a beta-blocker in addition to Holter monitoring can provide a better estimate of the “true corrected QT interval,” since it would allow measurement of the corrected QT interval at slower heart rates. While we agree that Holter monitoring may provide an opportunity to observe the lack of prolongation of the QT interval when the heart rate slows naturally (eg, during sleep), we have reservations about the other points.
First, we prefer not to use the term “true corrected QT interval” because, as we mentioned in our article, the correction formulas do not perform well in short QT syndrome. It would be better to use the QT interval itself, regardless of the heart rate.
Second, whether beta-blockers would alter the heart rate without altering the QT interval is something that deserves to be evaluated in patients with an established diagnosis of short QT syndrome. Since catecholamines can cause shortening of the QT interval,1 could beta-blockers have a different effect on the QT interval in patients with and without short QT syndrome? To our knowledge, there are no data that specifically address this question.
The last point we would like to emphasize is the complexity of making the diagnosis of short QT syndrome. Electrocardiographic criteria, especially when equivocal, should probably not be the sole diagnostic basis for short QT syndrome. A personal or family history of arrhythmias, with or without genetic testing, has additive value as demonstrated by the excellent paper by Gollob et al.2
- Bjerregaard P, Gussak I. Short QT syndrome: mechanism, diagnosis, and treatment. Nat Clin Pract Cardiovasc Med 2005; 2:84–87.
- Gollob MH, Redpath CJ, Roberts JD. The short QT syndrome: proposed diagnostic criteria. J Am Coll Cardiol 2011; 57:802–812.
Options for managing severe aortic stenosis: A case-based review
Surgical aortic valve replacement remains the gold standard treatment for symptomatic aortic valve stenosis in patients at low or moderate risk of surgical complications. But this is a disease of the elderly, many of whom are too frail or too sick to undergo surgery.
Now, patients who cannot undergo this surgery can be offered the less invasive option of transcatheter aortic valve replacement. Balloon valvuloplasty, sodium nitroprusside, and intra-aortic balloon counterpulsation can buy time for ill patients while more permanent mechanical interventions are being considered.
In this review, we will present several cases that highlight management choices for patients with severe aortic stenosis.
A PROGRESSIVE DISEASE OF THE ELDERLY
Aortic stenosis is the most common acquired valvular disease in the United States, and its incidence and prevalence are rising as the population ages. Epidemiologic studies suggest that 2% to 7% of all patients over age 65 have it.1,2
The natural history of the untreated disease is well established, with several case series showing an average decrease of 0.1 cm2 per year in aortic valve area and an increase of 7 mm Hg per year in the pressure gradient across the valve once the diagnosis is made.3,4 Development of angina, syncope, or heart failure is associated with adverse clinical outcomes, including death, and warrants prompt intervention with aortic valve replacement.5–7 Without intervention, mortality reaches as high as 75% within 3 years of symptom onset.
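To make the pace of these average rates concrete, here is a minimal, purely illustrative sketch in Python (the function name and the starting values are our own; the rates are the population averages cited above, and individual patients progress faster or slower):

```python
# Illustrative only: a linear projection using the average progression rates
# cited above (valve area -0.1 cm2/year, mean pressure gradient +7 mm Hg/year).
# Individual patients progress faster or slower than these averages.

def project_progression(valve_area_cm2, mean_gradient_mmhg, years):
    """Project valve area and mean gradient `years` from now at average rates."""
    area = max(valve_area_cm2 - 0.1 * years, 0.0)
    gradient = mean_gradient_mmhg + 7.0 * years
    return area, gradient

# Example: a hypothetical valve of 1.0 cm2 with a 40 mm Hg mean gradient today
for years in (1, 2, 3):
    area, gradient = project_progression(1.0, 40, years)
    print(f"In {years} year(s): ~{area:.1f} cm2, ~{gradient:.0f} mm Hg")
```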
Statins, bisphosphonates, and angiotensin-converting enzyme inhibitors have been used in attempts to slow or reverse the progression of aortic stenosis. However, studies of these drugs have had mixed results, and no definitive benefit has been shown.8–13 Surgical aortic valve replacement, on the other hand, normalizes the life expectancy of patients with aortic stenosis to that of age- and sex-matched controls and remains the gold standard therapy for patients who have symptoms.14
Traditionally, valve replacement has involved open heart surgery, since it requires direct visualization of the valve while the patient is on cardiopulmonary bypass. Unfortunately, many patients have multiple comorbid conditions and therefore are not candidates for open heart surgery. Options for these patients include aortic valvuloplasty and transcatheter aortic valve replacement. While there is considerable experience with aortic valvuloplasty, transcatheter aortic valve replacement is relatively new. In large randomized trials and registries, the transcatheter procedure has been shown to significantly improve long-term survival compared with medical management alone in inoperable patients and to have benefit similar to that of surgery in the high-risk population.15–17
CASE 1: SEVERE, SYMPTOMATIC STENOSIS IN A GOOD SURGICAL CANDIDATE
Mr. A, age 83, presents with shortness of breath and peripheral edema that have been worsening over the past several months. His pulse rate is 64 beats per minute and his blood pressure is 110/90 mm Hg. Auscultation reveals an absent aortic second heart sound and a late-peaking systolic murmur that increases with expiration.
On echocardiography, his left ventricular ejection fraction is 55%, peak transaortic valve gradient 88 mm Hg, mean gradient 60 mm Hg, and effective valve area 0.6 cm2. He undergoes catheterization of the left side of his heart, which shows normal coronary arteries.
Mr. A also has hypertension and hyperlipidemia; his renal and pulmonary functions are normal.
How would you manage Mr. A’s aortic stenosis?
Symptomatic aortic stenosis leads to adverse clinical outcomes if managed medically without mechanical intervention,5–7 but patients who undergo aortic valve replacement have age-corrected postoperative survival rates that are nearly normal.14 Furthermore, thanks to improvements in surgical techniques and perioperative management, surgical mortality rates have decreased significantly in recent years and now range from 1% to 8%.18–20 The accumulated evidence showing clear superiority of a surgical approach over medical therapy has greatly simplified the therapeutic algorithm.21
Consequently, the current guidelines from the American College of Cardiology and American Heart Association (ACC/AHA) give surgery a class I indication (evidence or general agreement that the procedure is beneficial, useful, and effective) for symptomatic severe aortic stenosis (Figure 1). This level of recommendation also applies to patients who have severe but asymptomatic aortic stenosis who are undergoing other types of cardiac surgery and also to patients with severe aortic stenosis and left ventricular dysfunction (defined as an ejection fraction < 50%).21
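As a compact restatement of these three class I scenarios, the following is an illustrative sketch (the function and argument names are hypothetical, and this is not a substitute for the full guideline algorithm in Figure 1):

```python
# Illustrative restatement of the three class I scenarios described above
# (hypothetical function and argument names; not a clinical decision tool).

def class_i_indication_for_surgical_avr(severe_as, symptomatic,
                                        other_cardiac_surgery_planned,
                                        ejection_fraction):
    """True if any class I scenario applies to a patient with severe aortic stenosis."""
    if not severe_as:
        return False
    return symptomatic or other_cardiac_surgery_planned or ejection_fraction < 0.50

# Mr. A: severe, symptomatic stenosis with an ejection fraction of 55%
print(class_i_indication_for_surgical_avr(True, True, False, 0.55))  # True
```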
Mr. A was referred for surgical aortic valve replacement, given its clear survival benefit.
CASE 2: SYMPTOMS AND LEFT VENTRICULAR DYSFUNCTION
Ms. B, age 79, has hypertension and hyperlipidemia and now presents to the outpatient department with worsening shortness of breath and chest discomfort. Electrocardiography shows significant left ventricular hypertrophy and abnormal repolarization. Left heart catheterization reveals mild nonobstructive coronary artery disease.
Echocardiography reveals an ejection fraction of 25%, severe left ventricular hypertrophy, and global hypokinesis. The aortic valve leaflets appear heavily calcified, with restricted motion. The peak and mean gradients across the aortic valve are 40 and 28 mm Hg, and the valve area is 0.8 cm2. Right heart catheterization shows a cardiac output of 3.1 L/min.
Does this patient’s aortic stenosis account for her clinical presentation?
Managing patients who have suspected severe aortic stenosis, left ventricular dysfunction, and low aortic valve gradients can be challenging. Although data for surgical intervention are not as robust for these patient subsets as for patients like Mr. A, several case series have suggested that survival in these patients is significantly better with surgery than with medical therapy alone.22–27
Specific factors predict whether patients with ventricular dysfunction and low gradients will benefit from aortic valve replacement. Dobutamine stress echocardiography is helpful in distinguishing true severe aortic stenosis from “pseudostenosis,” in which leaflet motion is restricted due to primary cardiomyopathy and low flow. Distinguishing between true aortic stenosis and pseudostenosis is of paramount value, as surgery is associated with improved long-term outcomes in patients with true aortic stenosis (even though they are at higher surgical risk), whereas those with pseudostenosis will not benefit from surgery.28–31
Infusion of dobutamine increases the flow across the aortic valve (if the left ventricle has contractile reserve; more on this below), and an increasing valve area with increasing doses of dobutamine is consistent with pseudostenosis. In this situation, treatment of the underlying cardiomyopathy is indicated as opposed to replacement of the aortic valve (Figure 2).
Contractile reserve is defined as an increase in stroke volume (> 20%), valvular gradient (> 10 mm Hg), or peak velocity (> 0.6 m/s) with peak dobutamine infusion. The presence of contractile reserve in patients with aortic stenosis identifies a high-risk group that benefits from aortic valve replacement (Figure 2).
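The way these contractile reserve criteria combine with the pseudostenosis logic described above can be summarized in a short, illustrative sketch (the thresholds are those quoted in the text; the function names and the wording of the outputs are ours, and this is not a clinical decision tool):

```python
# Illustrative only: combining the dobutamine stress echocardiography criteria
# quoted above (thresholds from the text; function and variable names are ours).

def has_contractile_reserve(stroke_volume_increase_pct,
                            gradient_increase_mmhg,
                            peak_velocity_increase_ms):
    """Contractile reserve: >20% stroke volume, >10 mm Hg gradient,
    or >0.6 m/s peak velocity increase at peak dobutamine infusion."""
    return (stroke_volume_increase_pct > 20
            or gradient_increase_mmhg > 10
            or peak_velocity_increase_ms > 0.6)

def interpret_dobutamine_echo(contractile_reserve, valve_area_increases):
    if not contractile_reserve:
        return "no contractile reserve: consider adjunct imaging (eg, CT for valve calcium)"
    if valve_area_increases:
        return "pseudostenosis: treat the underlying cardiomyopathy"
    return "true severe aortic stenosis: high-risk group likely to benefit from valve replacement"

# Ms. B (case 2): gradients and stroke volume rose while the valve area stayed small
print(interpret_dobutamine_echo(contractile_reserve=True, valve_area_increases=False))
```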
Treatment of patients who have inadequate reserve is controversial. In the absence of contractile reserve, an adjunct imaging study such as computed tomography may be of value in detecting calcified valve leaflets, as the presence of calcium is associated with true aortic stenosis. Comorbid conditions should be taken into account as well; given the higher surgical risk in this subset, aortic valve replacement might be futile in some cases.
The ACC/AHA guidelines now give dobutamine stress echocardiography a class IIa indication (meaning the weight of the evidence or opinion is in favor of usefulness or efficacy) for determination of contractile reserve and valvular stenosis for patients with an ejection fraction of 30% or less or a mean gradient of 40 mm Hg or less.21
Ms. B underwent dobutamine stress echocardiography. It showed increases in ejection fraction, stroke volume, and transvalvular gradients, indicating that she did have contractile reserve and true severe aortic stenosis. Consequently, she was referred for surgical aortic valve replacement.
CASE 3: MODERATE STENOSIS AND THREE-VESSEL CORONARY ARTERY DISEASE
Mr. C, age 81, has hypertension and hyperlipidemia. He now presents to the emergency department with chest discomfort that began suddenly, awakening him from sleep. His presenting electrocardiogram shows nonspecific changes, and he is diagnosed with non-ST-elevation myocardial infarction. He undergoes left heart catheterization, which reveals severe three-vessel coronary artery disease.
Echocardiography reveals an ejection fraction of 55% and aortic stenosis, with an aortic valve area of 1.2 cm2, a peak gradient of 44 mm Hg, and a mean gradient of 28 mm Hg.
How would you manage his aortic stenosis?
Moderate aortic stenosis in a patient who needs surgery for severe triple-vessel coronary artery disease, other valve diseases, or aortic disease raises the question of whether aortic valve replacement should be performed in conjunction with these surgeries. Although these patients would not otherwise qualify for aortic valve replacement, the fact that they will undergo a procedure that will expose them to the risks associated with open heart surgery makes them reasonable candidates. Even if the patient does not need aortic valve replacement right now, aortic stenosis progresses at a predictable rate—the valve area decreases by a mean of 0.1 cm2/year and the gradients increase by 7 mm Hg/year. Therefore, clinical judgment should be exercised so that the patient will not need to undergo open heart surgery again in the near future.
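As a rough, back-of-the-envelope illustration (assuming the average rates cited earlier and the commonly used severe-stenosis cutoffs of about 1.0 cm2 and 40 mm Hg), a valve like Mr. C’s, with an area of 1.2 cm2 and a mean gradient of 28 mm Hg, would on average cross into the severe range within roughly 2 years, which argues for addressing it during the planned bypass operation rather than risking a second sternotomy soon afterward.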
The ACC/AHA guidelines recommend aortic valve replacement for patients with moderate aortic stenosis undergoing coronary artery bypass grafting or surgery on the aorta or other heart valves, giving it a class IIa indication.21 This recommendation is based on several retrospective case series that evaluated survival, the need for reoperation for aortic valve replacement, or both in patients undergoing coronary artery bypass grafting.32–35
No data exist, however, on adding aortic valve replacement to coronary artery bypass grafting in cases of mild aortic stenosis. As a result, this practice is controversial and carries a class IIb recommendation (meaning that its usefulness or efficacy is less well established). The ACC/AHA guidelines state that aortic valve replacement “may be considered” in patients undergoing coronary artery bypass grafting who have mild aortic stenosis (mean gradient < 30 mm Hg or jet velocity < 3 m/s) when there is evidence, such as moderate or severe valve calcification, that progression may be rapid (level of evidence C: based only on consensus opinion of experts, case studies, or standard of care).21
Mr. C, who has moderate aortic stenosis, underwent aortic valve replacement in conjunction with three-vessel bypass grafting.
CASE 4: ASYMPTOMATIC BUT SEVERE STENOSIS
Mr. D, age 74, has hypertension, hyperlipidemia, and aortic stenosis. He now presents to the outpatient department for his annual echocardiogram to follow his aortic stenosis. He has a sedentary lifestyle but feels well performing activities of daily living. He denies dyspnea on exertion, chest pain, or syncope.
His echocardiogram reveals an effective aortic valve area of 0.7 cm2, peak gradient 90 mm Hg, and mean gradient 70 mm Hg. There is evidence of severe left ventricular hypertrophy, and the valve leaflets show bulky calcification and severe restriction. An echocardiogram performed at the same institution a year earlier revealed gradients of 60 and 40 mm Hg.
Blood is drawn for laboratory tests, including N-terminal pro-brain natriuretic peptide, which is 350 pg/mL (reference range for his age < 125 pg/mL). He is referred for a treadmill stress test, which elicits symptoms at a moderate activity level.
How would you manage his aortic stenosis?
Aortic valve replacement can be considered in patients who have asymptomatic but severe aortic stenosis with preserved left ventricular function (class IIb indication).21
Clinical assessment of asymptomatic aortic stenosis can be challenging, however, as patients may underreport their symptoms or decrease their activity levels to avoid symptoms. Exercise testing in such patients can elicit symptoms, unmask diminished exercise capacity, and help determine if they should be referred for surgery.36,37 Natriuretic peptide levels have been shown to correlate with the severity of aortic stenosis,38,39 and more importantly, to help predict symptom onset, cardiac death, and need for aortic valve replacement.40–42
Some patients with asymptomatic but severe aortic stenosis are at higher risk of morbidity and death. High-risk subsets include patients with rapid progression of aortic stenosis and those with critical aortic stenosis characterized by an aortic valve area less than 0.60 cm2, mean gradient greater than 60 mm Hg, and jet velocity greater than 5.0 m/s. It is reasonable to offer these patients surgery if their expected operative mortality risk is less than 1.0%.21
Mr. D has evidence of rapid progression as defined by an increase in aortic jet velocity of more than 0.3 m/s/year. He is at low surgical risk and was referred for elective aortic valve replacement.
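For readers checking the arithmetic against the vignette, which reports peak gradients rather than velocities: the peak jet velocity can be approximated from the peak Doppler gradient with the simplified Bernoulli relation (peak gradient ≈ 4 × velocity²). A short, illustrative back-calculation from Mr. D’s gradients:

```python
# Illustrative only: back-calculating peak jet velocity from the peak Doppler
# gradient with the simplified Bernoulli relation, gradient ≈ 4 * velocity**2.
from math import sqrt

v_prior = sqrt(60 / 4)  # last year's peak gradient of 60 mm Hg -> ~3.9 m/s
v_now = sqrt(90 / 4)    # this year's peak gradient of 90 mm Hg -> ~4.7 m/s
print(f"Increase: ~{v_now - v_prior:.1f} m/s in 1 year")  # well above 0.3 m/s/year
```

By this rough estimate, his jet velocity rose by close to 0.9 m/s over the year, far exceeding the 0.3 m/s/year threshold for rapid progression.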
CASE 5: TOO FRAIL FOR SURGERY
Mr. E, age 84, has severe aortic stenosis (valve area 0.6 cm2, peak and mean gradients of 88 and 56 mm Hg), coronary artery disease status post coronary artery bypass grafting, moderate chronic obstructive pulmonary disease (forced expiratory volume in 1 second 0.8 L), chronic kidney disease (serum creatinine 1.9 mg/dL), hypertension, hyperlipidemia, and diabetes mellitus. He has preserved left ventricular function. He presents to the outpatient department with worsening shortness of breath and peripheral edema over the past several months. Your impression is that he is very frail. How would you manage Mr. E’s aortic stenosis?
Advances in surgical techniques and perioperative management over the years have enabled higher-risk patients to undergo surgical aortic valve replacement with excellent outcomes.18–20,43 Yet many patients still cannot undergo surgery because their risk is too high. Patients ineligible for surgery have traditionally been treated medically—with poor outcomes—or with balloon aortic valvuloplasty to palliate symptoms.
Transcatheter aortic valve replacement, approved by the US Food and Drug Administration (FDA) in 2011, now provides another option for these patients. In this procedure, a bioprosthetic valve mounted on a metal frame is implanted over the native stenotic valve.
Currently, the only FDA-approved and commercially available valve in the United States is the Edwards SAPIEN valve, which has bovine pericardial tissue leaflets fixed to a balloon-expandable stainless steel frame (Figure 3). In the Placement of Aortic Transcatheter Valves (PARTNER) trial,15 patients who could not undergo surgery and who underwent transcatheter replacement with this valve had a significantly better survival rate than patients treated medically.15,17 Use of this valve has also been compared against conventional surgical aortic valve replacement in high-risk patients and was found to have similar long-term outcomes (Figure 4).16 It was on the basis of this trial that this valve was granted approval for patients who cannot undergo surgery.
The standard of care for high-risk patients remains surgical aortic valve replacement, although it remains to be seen whether transcatheter replacement will be made available as well to patients eligible for surgery in the near future. There are currently no randomized data for transcatheter aortic valve replacement in patients at moderate to low surgical risk, and these patients should not be considered for this procedure.
Although the initial studies are encouraging for patients who cannot undergo surgery and who are at high risk without it, several issues and concerns remain. Importantly, the long-term durability of the transcatheter valve and longer-term outcomes remain unknown. Furthermore, the risk of vascular complications remains high (10% to 15%), dictating the need for careful patient selection. There are also concerns about the risks of stroke and of paravalvular aortic insufficiency. These issues are being investigated and addressed, however, and we hope that with increasing operator experience and improvements in the technique, outcomes will be improved.
Which approach for transcatheter aortic valve replacement?
There are several considerations in determining a patient’s eligibility for transcatheter aortic valve replacement.
Initially, these valves were placed by a transvenous, transseptal approach, but now retrograde placement through the femoral artery has become standard. In this procedure, the device is advanced retrograde from the femoral artery through the aorta and placed across the native aortic valve under fluoroscopic and echocardiographic guidance.
Patients who are not eligible for transfemoral placement because of severe atherosclerosis, tortuosity, or ectasia of the iliofemoral artery or aorta can still undergo percutaneous treatment with a transapical approach. This is a hybrid surgical-transcatheter approach in which the valve is delivered through a sheath placed by left ventricular apical puncture.17,44
A newer approach gaining popularity is the transaortic technique, in which the ascending aorta is accessed directly through a ministernotomy and the delivery sheath is placed with a direct puncture. Other approaches are through the axillary and subclavian arteries.
Other valves are under development
Several other valves are under development and will likely change the landscape of transcatheter aortic valve replacement with improving outcomes. Valves that are available in the United States are shown in Figure 3. The CoreValve, consisting of porcine pericardial leaflets mounted on a self-expanding nitinol stent, is currently being studied in a trial in the United States, and the manufacturer (Medtronic) is expected to seek approval once the results are available.
Mr. E was initially referred for surgery but was deemed unable to undergo it; he was subsequently found to be a good candidate for transcatheter aortic valve replacement.
CASE 6: LIFE-LIMITING COMORBID ILLNESS
Mr. F, age 77, has multiple problems: severe aortic stenosis (aortic valve area 0.6 cm2; peak and mean gradients of 92 and 59 mm Hg), stage IV pancreatic cancer, coronary artery disease status post coronary artery bypass grafting, chronic kidney disease (serum creatinine 1.9 mg/dL), hypertension, and hyperlipidemia. He presents to the outpatient department with shortness of breath at rest, orthopnea, effort intolerance, and peripheral edema over the past several months.
On physical examination, rales are heard at both lung bases. Left heart catheterization shows patent bypass grafts.
How would you manage Mr. F’s aortic stenosis?
Aortic valve replacement is not considered an option in patients with noncardiac illnesses and comorbidities that are life-limiting in the near term. Under these circumstances, aortic valvuloplasty can be offered as a means of palliating symptoms or, if the comorbid conditions can be modified, as a bridge to more definitive treatment with aortic valve replacement.
Since first described in 1986,45 percutaneous aortic valvuloplasty has been studied in several case series and registries, with consistent findings. Acutely, it increases the valve area and lessens the gradients across the valve, relieving symptoms. The risk of death during the procedure ranged from 3% to 13.5% in several case series, with a 30-day survival rate greater than 85%.46 However, the hemodynamic and symptomatic improvement is only short-term, as valve area and gradients gradually worsen within several months.47,48 Consequently, balloon valvuloplasty is considered a palliative approach.
Mr. F has a potentially life-limiting illness, ie, cancer, which would make him a candidate for aortic valvuloplasty rather than replacement. He can be referred for evaluation for this procedure in hopes of palliating his symptoms by relieving his dyspnea and improving his quality of life.
CASE 7: HEMODYNAMIC INSTABILITY
Mr. G, age 87, is scheduled for surgical aortic valve replacement because of severe aortic stenosis (valve area 0.5 cm2, peak and mean gradients 89 and 45 mm Hg) with an ejection fraction of 30%.
Two weeks before his scheduled surgery he presents to the emergency department with worsening fluid overload and increasing shortness of breath. His initial laboratory work shows new-onset renal failure, and he has signs of hypoperfusion on physical examination. He is transferred to the cardiac intensive care unit for further care.
How would you manage his aortic stenosis?
Patients with decompensated aortic stenosis and hemodynamic instability are at extreme risk during surgery. Medical stabilization beforehand may mitigate the risks associated with surgical or transcatheter aortic valve replacement. Aortic valvuloplasty, treatment with sodium nitroprusside, and support with intra-aortic balloon counterpulsation may help stabilize patients in this “low-output” setting.
Sodium nitroprusside has long been used in low-output states. By relaxing vascular smooth muscle, it leads to increased venous capacitance, decreasing preload and congestion. It also decreases systemic vascular resistance with a subsequent decrease in afterload, which in turn improves systolic emptying. Together, these effects reduce systolic and diastolic wall stress, lower myocardial oxygen consumption, and ultimately increase cardiac output.49,50
These theoretical benefits translate to clinical improvement and increased cardiac output, as shown in a case series of 25 patients with severe aortic stenosis and left ventricular systolic dysfunction (ejection fraction 35%) presenting in a low-output state in the absence of hypotension.51 These findings have led to an ACC/AHA recommendation for the use of sodium nitroprusside in patients who have severe aortic stenosis presenting in a low-output state with decompensated heart failure.21
Intra-aortic balloon counterpulsation, introduced in 1968, has been used in several clinical settings, including acute coronary syndromes, intractable ventricular arrhythmias, and refractory heart failure, and for support of hemodynamics in the perioperative setting. Its role in managing ventricular septal rupture and acute mitral regurgitation is well established. It reliably reduces afterload and improves coronary perfusion, augmenting the cardiac output. This in turn leads to improved systemic perfusion, which can buy time for a critically ill patient during which the primary disease process is addressed.
Recently, a case series in which intra-aortic balloon counterpulsation devices were placed in patients with severe aortic stenosis and cardiogenic shock showed findings similar to those with sodium nitroprusside infusion. Specifically, their use was associated with improved cardiac indices and filling pressures and a decrease in systemic vascular resistance, leading to better cardiac performance and systemic perfusion.52 Thus, intra-aortic balloon counterpulsation can be an option for stabilizing patients with severe aortic stenosis and cardiogenic shock.
Mr. G was treated with sodium nitroprusside and intravenous diuretics. He achieved symptomatic relief and his renal function returned to baseline. He subsequently underwent aortic valve replacement during the hospitalization.
- Carabello BA, Paulus WJ. Aortic stenosis. Lancet 2009; 373:956–966.
- Lindroos M, Kupari M, Heikkilä J, Tilvis R. Prevalence of aortic valve abnormalities in the elderly: an echocardiographic study of a random population sample. J Am Coll Cardiol 1993; 21:1220–1225.
- Otto CM, Pearlman AS, Gardner CL. Hemodynamic progression of aortic stenosis in adults assessed by Doppler echocardiography. J Am Coll Cardiol 1989; 13:545–550.
- Otto CM, Burwash IG, Legget ME, et al. Prospective study of asymptomatic valvular aortic stenosis. Clinical, echocardiographic, and exercise predictors of outcome. Circulation 1997; 95:2262–2270.
- Varadarajan P, Kapoor N, Bansal RC, Pai RG. Clinical profile and natural history of 453 nonsurgically managed patients with severe aortic stenosis. Ann Thorac Surg 2006; 82:2111–2115.
- Turina J, Hess O, Sepulcri F, Krayenbuehl HP. Spontaneous course of aortic valve disease. Eur Heart J 1987; 8:471–483.
- Horstkotte D, Loogen F. The natural history of aortic valve stenosis. Eur Heart J 1988; 9(suppl E):57–64.
- Novaro GM, Tiong IY, Pearce GL, Lauer MS, Sprecher DL, Griffin BP. Effect of hydroxymethylglutaryl coenzyme A reductase inhibitors on the progression of calcific aortic stenosis. Circulation 2001; 104:2205–2209.
- Cowell SJ, Newby DE, Prescott RJ, et al; Scottish Aortic Stenosis and Lipid Lowering Trial, Impact on Regression (SALTIRE) Investigators. A randomized trial of intensive lipid-lowering therapy in calcific aortic stenosis. N Engl J Med 2005; 352:2389–2397.
- Rossebø AB, Pedersen TR, Boman K, et al; SEAS Investigators. Intensive lipid lowering with simvastatin and ezetimibe in aortic stenosis. N Engl J Med 2008; 359:1343–1356.
- Moura LM, Ramos SF, Zamorano JL, et al. Rosuvastatin affecting aortic valve endothelium to slow the progression of aortic stenosis. J Am Coll Cardiol 2007; 49:554–561.
- Rosenhek R, Rader F, Loho N, et al. Statins but not angiotensin-converting enzyme inhibitors delay progression of aortic stenosis. Circulation 2004; 110:1291–1295.
- O’Brien KD, Probstfield JL, Caulfield MT, et al. Angiotensin-converting enzyme inhibitors and change in aortic valve calcium. Arch Intern Med 2005; 165:858–862.
- Lindblom D, Lindblom U, Qvist J, Lundström H. Long-term relative survival rates after heart valve replacement. J Am Coll Cardiol 1990; 15:566–573.
- Makkar RR, Fontana GP, Jilaihawi H, et al; PARTNER Trial Investigators. Transcatheter aortic-valve replacement for inoperable severe aortic stenosis. N Engl J Med 2012; 366:1696–1704.
- Smith CR, Leon MB, Mack MJ, et al; PARTNER Trial Investigators. Transcatheter versus surgical aortic-valve replacement in high-risk patients. N Engl J Med 2011; 364:2187–2198.
- Leon MB, Smith CR, Mack M, et al; PARTNER Trial Investigators. Transcatheter aortic-valve implantation for aortic stenosis in patients who cannot undergo surgery. N Engl J Med 2010; 363:1597–1607.
- Di Eusanio M, Fortuna D, Cristell D, et al; RERIC (Emilia Romagna Cardiac Surgery Registry) Investigators. Contemporary outcomes of conventional aortic valve replacement in 638 octogenarians: insights from an Italian Regional Cardiac Surgery Registry (RERIC). Eur J Cardiothorac Surg 2012; 41:1247–1252.
- Di Eusanio M, Fortuna D, De Palma R, et al. Aortic valve replacement: results and predictors of mortality from a contemporary series of 2256 patients. J Thorac Cardiovasc Surg 2011; 141:940–947.
- Jamieson WR, Edwards FH, Schwartz M, Bero JW, Clark RE, Grover FL. Risk stratification for cardiac valve replacement. National Cardiac Surgery Database. Database Committee of the Society of Thoracic Surgeons. Ann Thorac Surg 1999; 67:943–951.
- Bonow RO, Carabello BA, Chatterjee K, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2008 focused update incorporated into the ACC/AHA 2006 guidelines for the management of patients with valvular heart disease: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Writing Committee to revise the 1998 guidelines for the management of patients with valvular heart disease). Endorsed by the Society of Cardiovascular Anesthesiologists, Society for Cardiovascular Angiography and Interventions, and Society of Thoracic Surgeons. J Am Coll Cardiol 2008; 52:e1–e142.
- Hachicha Z, Dumesnil JG, Bogaty P, Pibarot P. Paradoxical low-flow, low-gradient severe aortic stenosis despite preserved ejection fraction is associated with higher afterload and reduced survival. Circulation 2007; 115:2856–2864.
- Vaquette B, Corbineau H, Laurent M, et al. Valve replacement in patients with critical aortic stenosis and depressed left ventricular function: predictors of operative risk, left ventricular function recovery, and long term outcome. Heart 2005; 91:1324–1329.
- Connolly HM, Oh JK, Orszulak TA, et al. Aortic valve replacement for aortic stenosis with severe left ventricular dysfunction. Prognostic indicators. Circulation 1997; 95:2395–2400.
- Connolly HM, Oh JK, Schaff HV, et al. Severe aortic stenosis with low transvalvular gradient and severe left ventricular dysfunction: result of aortic valve replacement in 52 patients. Circulation 2000; 101:1940–1946.
- Pereira JJ, Lauer MS, Bashir M, et al. Survival after aortic valve replacement for severe aortic stenosis with low transvalvular gradients and severe left ventricular dysfunction. J Am Coll Cardiol 2002; 39:1356–1363.
- Pai RG, Varadarajan P, Razzouk A. Survival benefit of aortic valve replacement in patients with severe aortic stenosis with low ejection fraction and low gradient with normal ejection fraction. Ann Thorac Surg 2008; 86:1781–1789.
- Monin JL, Monchi M, Gest V, Duval-Moulin AM, Dubois-Rande JL, Gueret P. Aortic stenosis with severe left ventricular dysfunction and low transvalvular pressure gradients: risk stratification by low-dose dobutamine echocardiography. J Am Coll Cardiol 2001; 37:2101–2107.
- Monin JL, Quéré JP, Monchi M, et al. Low-gradient aortic stenosis: operative risk stratification and predictors for long-term outcome: a multicenter study using dobutamine stress hemodynamics. Circulation 2003; 108:319–324.
- Zuppiroli A, Mori F, Olivotto I, Castelli G, Favilli S, Dolara A. Therapeutic implications of contractile reserve elicited by dobutamine echocardiography in symptomatic, low-gradient aortic stenosis. Ital Heart J 2003; 4:264–270.
- Tribouilloy C, Lévy F, Rusinaru D, et al. Outcome after aortic valve replacement for low-flow/low-gradient aortic stenosis without contractile reserve on dobutamine stress echocardiography. J Am Coll Cardiol 2009; 53:1865–1873.
- Ahmed AA, Graham AN, Lovell D, O’Kane HO. Management of mild to moderate aortic valve disease during coronary artery bypass grafting. Eur J Cardiothorac Surg 2003; 24:535–539.
- Verhoye JP, Merlicco F, Sami IM, et al. Aortic valve replacement for aortic stenosis after previous coronary artery bypass grafting: could early reoperation be prevented? J Heart Valve Dis 2006; 15:474–478.
- Hochrein J, Lucke JC, Harrison JK, et al. Mortality and need for reoperation in patients with mild-to-moderate asymptomatic aortic valve disease undergoing coronary artery bypass graft alone. Am Heart J 1999; 138:791–797.
- Pereira JJ, Balaban K, Lauer MS, Lytle B, Thomas JD, Garcia MJ. Aortic valve replacement in patients with mild or moderate aortic stenosis and coronary bypass surgery. Am J Med 2005; 118:735–742.
- Amato MC, Moffa PJ, Werner KE, Ramires JA. Treatment decision in asymptomatic aortic valve stenosis: role of exercise testing. Heart 2001; 86:381–386.
- Das P, Rimington H, Chambers J. Exercise testing to stratify risk in aortic stenosis. Eur Heart J 2005; 26:1309–1313.
- Weber M, Arnold R, Rau M, et al. Relation of N-terminal pro-B-type natriuretic peptide to severity of valvular aortic stenosis. Am J Cardiol 2004; 94:740–745.
- Weber M, Hausen M, Arnold R, et al. Prognostic value of N-terminal pro-B-type natriuretic peptide for conservatively and surgically treated patients with aortic valve stenosis. Heart 2006; 92:1639–1644.
- Gerber IL, Stewart RA, Legget ME, et al. Increased plasma natriuretic peptide levels reflect symptom onset in aortic stenosis. Circulation 2003; 107:1884–1890.
- Bergler-Klein J, Klaar U, Heger M, et al. Natriuretic peptides predict symptom-free survival and postoperative outcome in severe aortic stenosis. Circulation 2004; 109:2302–2308.
- Lancellotti P, Moonen M, Magne J, et al. Prognostic effect of long-axis left ventricular dysfunction and B-type natriuretic peptide levels in asymptomatic aortic stenosis. Am J Cardiol 2010; 105:383–388.
- Langanay T, Flécher E, Fouquet O, et al. Aortic valve replacement in the elderly: the real life. Ann Thorac Surg 2012; 93:70–77.
- Christofferson RD, Kapadia SR, Rajagopal V, Tuzcu EM. Emerging transcatheter therapies for aortic and mitral disease. Heart 2009; 95:148–155.
- Cribier A, Savin T, Saoudi N, Rocha P, Berland J, Letac B. Percutaneous transluminal valvuloplasty of acquired aortic stenosis in elderly patients: an alternative to valve replacement? Lancet 1986; 1:63–67.
- Percutaneous balloon aortic valvuloplasty. Acute and 30-day follow-up results in 674 patients from the NHLBI Balloon Valvuloplasty Registry. Circulation 1991; 84:2383–2397.
- Otto CM, Mickel MC, Kennedy JW, et al. Three-year outcome after balloon aortic valvuloplasty. Insights into prognosis of valvular aortic stenosis. Circulation 1994; 89:642–650.
- Bernard Y, Etievent J, Mourand JL, et al. Long-term results of percutaneous aortic valvuloplasty compared with aortic valve replacement in patients more than 75 years old. J Am Coll Cardiol 1992; 20:796–801.
- Elkayam U, Janmohamed M, Habib M, Hatamizadeh P. Vasodilators in the management of acute heart failure. Crit Care Med 2008; 36(suppl 1):S95–S105.
- Popovic ZB, Khot UN, Novaro GM, et al. Effects of sodium nitroprusside in aortic stenosis associated with severe heart failure: pressure-volume loop analysis using a numerical model. Am J Physiol Heart Circ Physiol 2005; 288:H416–H423.
- Khot UN, Novaro GM, Popovic ZB, et al. Nitroprusside in critically ill patients with left ventricular dysfunction and aortic stenosis. N Engl J Med 2003; 348:1756–1763.
- Aksoy O, Yousefzai R, Singh D, et al. Cardiogenic shock in the setting of severe aortic stenosis: role of intra-aortic balloon pump support. Heart 2011; 97:838–843.
CASE 7: HEMODYNAMIC INSTABILITY
Mr. G, age 87, is scheduled for surgical aortic valve replacement because of severe aortic stenosis (valve area 0.5 cm2, peak and mean gradients 89 and 45 mm Hg) with an ejection fraction of 30%.
Two weeks before his scheduled surgery he presents to the emergency department with worsening fluid overload and increasing shortness of breath. His initial laboratory work shows new-onset renal failure, and he has signs of hypoperfusion on physical examination. He is transferred to the cardiac intensive care unit for further care.
How would you manage his aortic stenosis?
Patients with decompensated aortic stenosis and hemodynamic instability are at extreme risk during surgery. Medical stabilization beforehand may mitigate the risks associated with surgical or transcatheter aortic valve replacement. Aortic valvuloplasty, treatment with sodium nitroprusside, and support with intra-aortic balloon counterpulsation may help stabilize patients in this “low-output” setting.
Sodium nitroprusside has long been used in low-output states. By relaxing vascular smooth muscle, it leads to increased venous capacitance, decreasing preload and congestion. It also decreases systemic vascular resistance with a subsequent decrease in afterload, which in turn improves systolic emptying. Together, these effects reduce systolic and diastolic wall stress, lower myocardial oxygen consumption, and ultimately increase cardiac output.49,50
These theoretical benefits translate to clinical improvement and increased cardiac output, as shown in a case series of 25 patients with severe aortic stenosis and left ventricular systolic dysfunction (ejection fraction 35%) presenting in a low-output state in the absence of hypotension.51 These findings have led to a ACC/AHA recommendation for the use of sodium nitroprusside in patients who have severe aortic stenosis presenting in low-output state with decompensated heart failure.21
Intra-aortic balloon counterpulsation, introduced in 1968, has been used in several clinical settings, including acute coronary syndromes, intractable ventricular arrhythmias, and refractory heart failure, and for support of hemodynamics in the perioperative setting. Its role in managing ventricular septal rupture and acute mitral regurgitation is well established. It reliably reduces afterload and improves coronary perfusion, augmenting the cardiac output. This in turn leads to improved systemic perfusion, which can buy time for a critically ill patient during which the primary disease process is addressed.
Recently, a case series in which intraaortic balloon counterpulsation devices were placed in patients with severe aortic stenosis and cardiogenic shock showed findings similar to those with sodium nitroprusside infusion. Specifically, their use was associated with improved cardiac indices and filling pressures with a decrease in systemic vascular resistance. These changes have led to increased cardiac performance, resulting in better systemic perfusion.52 Thus, intra-aortic balloon counterpulsation can be an option for stabilizing patients with severe aortic stenosis and cardiogenic shock.
Mr. G was treated with sodium nitroprusside and intravenous diuretics. He achieved symptomatic relief and his renal function returned to baseline. He subsequently underwent aortic valve replacement during the hospitalization.
Surgical aortic valve replacement remains the gold standard treatment for symptomatic aortic valve stenosis in patients at low or moderate risk of surgical complications. But this is a disease of the elderly, many of whom are too frail or too sick to undergo surgery.
Now, patients who cannot undergo this surgery can be offered the less invasive option of transcatheter aortic valve replacement. Balloon valvuloplasty, sodium nitroprusside, and intra-aortic balloon counterpulsation can buy time for ill patients while more permanent mechanical interventions are being considered.
In this review, we will present several cases that highlight management choices for patients with severe aortic stenosis.
A PROGRESSIVE DISEASE OF THE ELDERLY
Aortic stenosis is the most common acquired valvular disease in the United States, and its incidence and prevalence are rising as the population ages. Epidemiologic studies suggest that 2% to 7% of all patients over age 65 have it.1,2
The natural history of the untreated disease is well established, with several case series showing an average decrease of 0.1 cm2 per year in aortic valve area and an increase of 7 mm Hg per year in the pressure gradient across the valve once the diagnosis is made.3,4 Development of angina, syncope, or heart failure is associated with adverse clinical outcomes, including death, and warrants prompt intervention with aortic valve replacement.5–7 Without intervention, mortality reaches as high as 75% at 3 years once symptoms develop.
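To put these average rates in concrete terms, a simplified linear projection can be used (the starting values here are hypothetical and chosen only for illustration, not a patient-specific prediction): a valve area of 1.0 cm2 declining at 0.1 cm2 per year would fall to about 0.5 cm2 over 5 years, and a mean gradient of 30 mm Hg rising at 7 mm Hg per year would approach 30 + 7 × 5 = 65 mm Hg over the same period.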
Statins, bisphosphonates, and angiotensin-converting enzyme inhibitors have been used in attempts to slow or reverse the progression of aortic stenosis. However, studies of these drugs have had mixed results, and no definitive benefit has been shown.8–13 Surgical aortic valve replacement, on the other hand, normalizes the life expectancy of patients with aortic stenosis to that of age- and sex-matched controls and remains the gold standard therapy for patients who have symptoms.14
Traditionally, valve replacement has involved open heart surgery, since it requires direct visualization of the valve while the patient is on cardiopulmonary bypass. Unfortunately, many patients have multiple comorbid conditions and therefore are not candidates for open heart surgery. Options for these patients include aortic valvuloplasty and transcatheter aortic valve replacement. While there is considerable experience with aortic valvuloplasty, transcatheter aortic valve replacement is relatively new. In large randomized trials and registries, the transcatheter procedure has been shown to significantly improve long-term survival compared with medical management alone in inoperable patients and to have benefit similar to that of surgery in the high-risk population.15–17
CASE 1: SEVERE, SYMPTOMATIC STENOSIS IN A GOOD SURGICAL CANDIDATE
Mr. A, age 83, presents with shortness of breath and peripheral edema that have been worsening over the past several months. His pulse rate is 64 beats per minute and his blood pressure is 110/90 mm Hg. Auscultation reveals an absent aortic second heart sound with a late-peaking systolic murmur that increases with expiration.
On echocardiography, his left ventricular ejection fraction is 55%, peak transaortic valve gradient 88 mm Hg, mean gradient 60 mm Hg, and effective valve area 0.6 cm2. He undergoes catheterization of the left side of his heart, which shows normal coronary arteries.
Mr. A also has hypertension and hyperlipidemia; his renal and pulmonary functions are normal.
How would you manage Mr. A’s aortic stenosis?
Symptomatic aortic stenosis leads to adverse clinical outcomes if managed medically without mechanical intervention,5–7 but patients who undergo aortic valve replacement have age-corrected postoperative survival rates that are nearly normal.14 Furthermore, thanks to improvements in surgical techniques and perioperative management, surgical mortality rates have decreased significantly in recent years and now range from 1% to 8%.18–20 The accumulated evidence showing clear superiority of a surgical approach over medical therapy has greatly simplified the therapeutic algorithm.21
Consequently, the current guidelines from the American College of Cardiology and American Heart Association (ACC/AHA) give surgery a class I indication (evidence or general agreement that the procedure is beneficial, useful, and effective) for symptomatic severe aortic stenosis (Figure 1). This level of recommendation also applies to patients who have severe but asymptomatic aortic stenosis who are undergoing other types of cardiac surgery and also to patients with severe aortic stenosis and left ventricular dysfunction (defined as an ejection fraction < 50%).21
Mr. A was referred for surgical aortic valve replacement, given its clear survival benefit.
CASE 2: SYMPTOMS AND LEFT VENTRICULAR DYSFUNCTION
Ms. B, age 79, has hypertension and hyperlipidemia and now presents to the outpatient department with worsening shortness of breath and chest discomfort. Electrocardiography shows significant left ventricular hypertrophy and abnormal repolarization. Left heart catheterization reveals mild nonobstructive coronary artery disease.
Echocardiography reveals an ejection fraction of 25%, severe left ventricular hypertrophy, and global hypokinesis. The aortic valve leaflets appear heavily calcified, with restricted motion. The peak and mean gradients across the aortic valve are 40 and 28 mm Hg, and the valve area is 0.8 cm2. Right heart catheterization shows a cardiac output of 3.1 L/min.
Does this patient’s aortic stenosis account for her clinical presentation?
Managing patients who have suspected severe aortic stenosis, left ventricular dysfunction, and low aortic valve gradients can be challenging. Although data for surgical intervention are not as robust for these patient subsets as for patients like Mr. A, several case series have suggested that survival in these patients is significantly better with surgery than with medical therapy alone.22–27
Specific factors predict whether patients with ventricular dysfunction and low gradients will benefit from aortic valve replacement. Dobutamine stress echocardiography is helpful in distinguishing true severe aortic stenosis from “pseudostenosis,” in which leaflet motion is restricted due to primary cardiomyopathy and low flow. Distinguishing between true aortic stenosis and pseudostenosis is of paramount value, as surgery is associated with improved long-term outcomes in patients with true aortic stenosis (even though they are at higher surgical risk), whereas those with pseudostenosis will not benefit from surgery.28–31
Infusion of dobutamine increases the flow across the aortic valve (if the left ventricle has contractile reserve; more on this below), and an increasing valve area with increasing doses of dobutamine is consistent with pseudostenosis. In this situation, treatment of the underlying cardiomyopathy is indicated as opposed to replacement of the aortic valve (Figure 2).
Contractile reserve is defined as an increase in stroke volume (> 20%), valvular gradient (> 10 mm Hg), or peak velocity (> 0.6 m/s) with peak dobutamine infusion. The presence of contractile reserve in patients with aortic stenosis identifies a high-risk group that benefits from aortic valve replacement (Figure 2).
Treatment of patients who have inadequate reserve is controversial. In the absence of contractile reserve, an adjunct imaging study such as computed tomography may be of value in detecting calcified valve leaflets, as the presence of calcium is associated with true aortic stenosis. Comorbid conditions should be taken into account as well, given the higher surgical risk in this patient subset, as aortic valve replacement in this already high-risk group of patients might be futile in some cases.
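The interpretive logic described above can be summarized in a short sketch (illustrative only; the function name, the data layout, and the 0.2-cm2 valve-area margin are our assumptions rather than guideline values, and this is not a clinical decision tool):

def assess_dobutamine_response(baseline, peak, area_margin=0.2):
    """Classify the response to dobutamine stress echocardiography.

    baseline and peak are dicts with keys 'stroke_volume' (mL),
    'mean_gradient' (mm Hg), 'peak_velocity' (m/s), and 'valve_area' (cm2).
    area_margin (cm2) is an illustrative cutoff, not taken from the text.
    """
    sv_rise = (peak['stroke_volume'] - baseline['stroke_volume']) / baseline['stroke_volume']
    gradient_rise = peak['mean_gradient'] - baseline['mean_gradient']
    velocity_rise = peak['peak_velocity'] - baseline['peak_velocity']

    # Contractile reserve: stroke volume up by more than 20%, gradient up by
    # more than 10 mm Hg, or peak velocity up by more than 0.6 m/s at peak infusion.
    contractile_reserve = sv_rise > 0.20 or gradient_rise > 10 or velocity_rise > 0.6

    if not contractile_reserve:
        return "no contractile reserve: adjunct imaging (eg, CT for valve calcium) and clinical judgment"
    if peak['valve_area'] - baseline['valve_area'] > area_margin:
        # The valve area opens up as flow increases: consistent with pseudostenosis.
        return "pseudostenosis: treat the underlying cardiomyopathy"
    # The valve area stays fixed despite increased flow: true severe stenosis.
    return "true severe aortic stenosis with contractile reserve: consider valve replacement"

For a patient like Ms. B (below), whose stroke volume and gradients rose with dobutamine while the valve area did not, this logic returns the last branch.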
The ACC/AHA guidelines now give dobutamine stress echocardiography a class IIa indication (meaning the weight of the evidence or opinion is in favor of usefulness or efficacy) for determination of contractile reserve and valvular stenosis for patients with an ejection fraction of 30% or less or a mean gradient of 40 mm Hg or less.21
Ms. B underwent dobutamine stress echocardiography. It showed increases in ejection fraction, stroke volume, and transvalvular gradients, indicating that she did have contractile reserve and true severe aortic stenosis. Consequently, she was referred for surgical aortic valve replacement.
CASE 3: MODERATE STENOSIS AND THREE-VESSEL CORONARY ARTERY DISEASE
Mr. C, age 81, has hypertension and hyperlipidemia. He now presents to the emergency department with chest discomfort that began suddenly, awakening him from sleep. His presenting electrocardiogram shows nonspecific changes, and he is diagnosed with non-ST-elevation myocardial infarction. He undergoes left heart catheterization, which reveals severe three-vessel coronary artery disease.
Echocardiography reveals an ejection fraction of 55% and aortic stenosis, with an aortic valve area of 1.2 cm2, a peak gradient of 44 mm Hg, and a mean gradient of 28 mm Hg.
How would you manage his aortic stenosis?
Moderate aortic stenosis in a patient who needs surgery for severe triple-vessel coronary artery disease, other valve diseases, or aortic disease raises the question of whether aortic valve replacement should be performed in conjunction with these surgeries. Although these patients would not otherwise qualify for aortic valve replacement, the fact that they will undergo a procedure that will expose them to the risks associated with open heart surgery makes them reasonable candidates. Even if the patient does not need aortic valve replacement right now, aortic stenosis progresses at a predictable rate—the valve area decreases by a mean of 0.1 cm2/year and the gradients increase by 7 mm Hg/year. Therefore, clinical judgment should be exercised so that the patient will not need to undergo open heart surgery again in the near future.
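As a rough illustration of this reasoning (a linear extrapolation only, using a valve area of about 1.0 cm2 and a mean gradient of about 40 mm Hg as the commonly cited cutoffs for severe stenosis): Mr. C’s valve area of 1.2 cm2, decreasing at 0.1 cm2 per year, would be expected to reach 1.0 cm2 in roughly (1.2 − 1.0) / 0.1 = 2 years, and his mean gradient of 28 mm Hg, rising at 7 mm Hg per year, would approach 40 mm Hg in roughly (40 − 28) / 7 ≈ 1.7 years.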
The ACC/AHA guidelines recommend aortic valve replacement for patients with moderate aortic stenosis undergoing coronary artery bypass grafting or surgery on the aorta or other heart valves, giving it a class IIa indication.21 This recommendation is based on several retrospective case series that evaluated survival, the need for reoperation for aortic valve replacement, or both in patients undergoing coronary artery bypass grafting.32–35
No data exist, however, on adding aortic valve replacement to coronary artery bypass grafting in cases of mild aortic stenosis. As a result, it is controversial and carries a class IIb recommendation (meaning that its usefulness or efficacy is less well established). The ACC/AHA guidelines state that aortic valve replacement “may be considered” in patients undergoing coronary artery bypass grafting who have mild aortic stenosis (mean gradient < 30 mm Hg or jet velocity < 3 m/s) when there is evidence, such as moderate or severe valve calcification, that progression may be rapid (level of evidence C: based only on consensus opinion of experts, case studies or standard of care).21
Mr. C, who has moderate aortic stenosis, underwent aortic valve replacement in conjunction with three-vessel bypass grafting.
CASE 4: ASYMPTOMATIC BUT SEVERE STENOSIS
Mr. D, age 74, has hypertension, hyperlipidemia, and aortic stenosis. He now presents to the outpatient department for his annual echocardiogram to follow his aortic stenosis. He has a sedentary lifestyle but feels well performing activities of daily living. He denies dyspnea on exertion, chest pain, or syncope.
His echocardiogram reveals an effective aortic valve area of 0.7 cm2, peak gradient 90 mm Hg, and mean gradient 70 mm Hg. There is evidence of severe left ventricular hypertrophy, and the valve leaflets show bulky calcification and severe restriction. An echocardiogram performed at the same institution a year earlier revealed gradients of 60 and 40 mm Hg.
Blood is drawn for laboratory tests, including N-terminal pro-brain natriuretic peptide, which is 350 pg/mL (reference range for his age < 125 pg/mL). He is referred for a treadmill stress test, which elicits symptoms at a moderate activity level.
How would you manage his aortic stenosis?
Aortic valve replacement can be considered in patients who have asymptomatic but severe aortic stenosis with preserved left ventricular function (class IIb indication).21
Clinical assessment of asymptomatic aortic stenosis can be challenging, however, as patients may underreport their symptoms or decrease their activity levels to avoid symptoms. Exercise testing in such patients can elicit symptoms, unmask diminished exercise capacity, and help determine if they should be referred for surgery.36,37 Natriuretic peptide levels have been shown to correlate with the severity of aortic stenosis,38,39 and more importantly, to help predict symptom onset, cardiac death, and need for aortic valve replacement.40–42
Some patients with asymptomatic but severe aortic stenosis are at higher risk of morbidity and death. High-risk subsets include patients with rapid progression of aortic stenosis and those with critical aortic stenosis characterized by an aortic valve area less than 0.60 cm2, mean gradient greater than 60 mm Hg, and jet velocity greater than 5.0 m/s. It is reasonable to offer these patients surgery if their expected operative mortality risk is less than 1.0%.21
Mr. D has evidence of rapid progression as defined by an increase in aortic jet velocity of more than 0.3 m/s/year. He is at low surgical risk and was referred for elective aortic valve replacement.
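To see where the velocity figure comes from (assuming the reported peak gradients are Doppler-derived), the simplified Bernoulli relation (peak gradient ≈ 4 × v², where v is the peak jet velocity in m/s) gives v ≈ √(60/4) ≈ 3.9 m/s a year ago and v ≈ √(90/4) ≈ 4.7 m/s now, an increase of roughly 0.9 m/s over 1 year, well above the 0.3 m/s/year threshold for rapid progression.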
CASE 5: TOO FRAIL FOR SURGERY
Mr. E, age 84, has severe aortic stenosis (valve area 0.6 cm2, peak and mean gradients of 88 and 56 mm Hg), coronary artery disease status post coronary artery bypass grafting, moderate chronic obstructive pulmonary disease (forced expiratory volume in 1 second 0.8 L), chronic kidney disease (serum creatinine 1.9 mg/dL), hypertension, hyperlipidemia, and diabetes mellitus. He has preserved left ventricular function. He presents to the outpatient department with worsening shortness of breath and peripheral edema over the past several months. Your impression is that he is very frail. How would you manage Mr. E’s aortic stenosis?
Advances in surgical techniques and perioperative management over the years have enabled higher-risk patients to undergo surgical aortic valve replacement with excellent outcomes.18–20,43 Yet many patients still cannot undergo surgery because their risk is too high. Patients ineligible for surgery have traditionally been treated medically (with poor outcomes) or with balloon aortic valvuloplasty to palliate symptoms.
Transcatheter aortic valve replacement, approved by the US Food and Drug Administration (FDA) in 2011, now provides another option for these patients. In this procedure, a bioprosthetic valve mounted on a metal frame is implanted over the native stenotic valve.
Currently, the only FDA-approved and commercially available valve in the United States is the Edwards SAPIEN valve, which has bovine pericardial tissue leaflets fixed to a balloon-expandable stainless steel frame (Figure 3). In the Placement of Aortic Transcatheter Valves (PARTNER) trial,15 patients who could not undergo surgery and who underwent transcatheter replacement with this valve had a significantly better survival rate than patients treated medically.15,17 Use of this valve has also been compared with conventional surgical aortic valve replacement in high-risk patients and was found to have similar long-term outcomes (Figure 4).16 It was on the basis of this trial that the valve was granted approval for patients who cannot undergo surgery.
Surgical aortic valve replacement remains the standard of care for high-risk patients, although whether transcatheter replacement will also become available to surgery-eligible patients in the near future remains to be seen. There are currently no randomized data on transcatheter aortic valve replacement in patients at moderate or low surgical risk, and these patients should not be considered for this procedure.
Although the initial studies are encouraging for patients who cannot undergo surgery and for those at high surgical risk, several issues and concerns remain. Importantly, the durability of the transcatheter valve and its longer-term outcomes are unknown. Furthermore, the risk of vascular complications remains high (10% to 15%), dictating the need for careful patient selection. There are also concerns about the risks of stroke and of paravalvular aortic insufficiency. These issues are being investigated and addressed, however, and we hope that with increasing operator experience and improvements in technique, outcomes will improve.
Which approach for transcatheter aortic valve replacement?
There are several considerations in determining a patient’s eligibility for transcatheter aortic valve replacement.
Initially, these valves were placed by a transvenous, transseptal approach, but now retrograde placement through the femoral artery has become standard. In this procedure, the device is advanced retrograde from the femoral artery through the aorta and placed across the native aortic valve under fluoroscopic and echocardiographic guidance.
Patients who are not eligible for transfemoral placement because of severe atherosclerosis, tortuosity, or ectasia of the iliofemoral artery or aorta can still undergo percutaneous treatment with a transapical approach. This is a hybrid surgical-transcatheter approach in which the valve is delivered through a sheath placed by left ventricular apical puncture.17,44
A newer approach gaining popularity is the transaortic technique, in which the ascending aorta is accessed directly through a ministernotomy and the delivery sheath is placed with a direct puncture. Other approaches are through the axillary and subclavian arteries.
Other valves are under development
Several other valves are under development and will likely change the landscape of transcatheter aortic valve replacement as outcomes improve. Valves available in the United States are shown in Figure 3. The CoreValve, consisting of porcine pericardial leaflets mounted on a self-expanding nitinol stent, is currently being studied in a US trial, and the manufacturer (Medtronic) is expected to seek approval when the results are complete.
Mr. E was initially referred for surgery, but when deemed to be unable to undergo surgery was found to be a good candidate for transcatheter aortic valve replacement.
CASE 6: LIFE-LIMITING COMORBID ILLNESS
Mr. F, age 77, has multiple problems: severe aortic stenosis (aortic valve area 0.6 cm2; peak and mean gradients of 92 and 59 mm Hg), stage IV pancreatic cancer, coronary artery disease status post coronary artery bypass grafting, chronic kidney disease (serum creatinine 1.9 mg/dL), hypertension, and hyperlipidemia. He presents to the outpatient department with shortness of breath at rest, orthopnea, effort intolerance, and peripheral edema over the past several months.
On physical examination, rales are heard at both lung bases. Left heart catheterization shows patent bypass grafts.
How would you manage Mr. F’s aortic stenosis?
Aortic valve replacement is not considered an option in patients with noncardiac illnesses and comorbidities that are life-limiting in the near term. Under these circumstances, aortic valvuloplasty can be offered as a means of palliating symptoms or, if the comorbid conditions can be modified, as a bridge to more definitive treatment with aortic valve replacement.
Since first described in 1986,45 percutaneous aortic valvuloplasty has been studied in several case series and registries, with consistent findings. Acutely, it increases the valve area and lessens the gradients across the valve, relieving symptoms. The risk of death during the procedure ranged from 3% to 13.5% in several case series, with a 30-day survival rate greater than 85%.46 However, the hemodynamic and symptomatic improvement is only short-term, as valve area and gradients gradually worsen within several months.47,48 Consequently, balloon valvuloplasty is considered a palliative approach.
Mr. F has a potentially life-limiting illness, ie, cancer, which would make him a candidate for aortic valvuloplasty rather than replacement. He can be referred for evaluation for this procedure in hopes of palliating his symptoms by relieving his dyspnea and improving his quality of life.
CASE 7: HEMODYNAMIC INSTABILITY
Mr. G, age 87, is scheduled for surgical aortic valve replacement because of severe aortic stenosis (valve area 0.5 cm2, peak and mean gradients 89 and 45 mm Hg) with an ejection fraction of 30%.
Two weeks before his scheduled surgery he presents to the emergency department with worsening fluid overload and increasing shortness of breath. His initial laboratory work shows new-onset renal failure, and he has signs of hypoperfusion on physical examination. He is transferred to the cardiac intensive care unit for further care.
How would you manage his aortic stenosis?
Patients with decompensated aortic stenosis and hemodynamic instability are at extreme risk during surgery. Medical stabilization beforehand may mitigate the risks associated with surgical or transcatheter aortic valve replacement. Aortic valvuloplasty, treatment with sodium nitroprusside, and support with intra-aortic balloon counterpulsation may help stabilize patients in this “low-output” setting.
Sodium nitroprusside has long been used in low-output states. By relaxing vascular smooth muscle, it leads to increased venous capacitance, decreasing preload and congestion. It also decreases systemic vascular resistance with a subsequent decrease in afterload, which in turn improves systolic emptying. Together, these effects reduce systolic and diastolic wall stress, lower myocardial oxygen consumption, and ultimately increase cardiac output.49,50
These theoretical benefits translate to clinical improvement and increased cardiac output, as shown in a case series of 25 patients with severe aortic stenosis and left ventricular systolic dysfunction (ejection fraction 35%) presenting in a low-output state in the absence of hypotension.51 These findings have led to an ACC/AHA recommendation for the use of sodium nitroprusside in patients who have severe aortic stenosis presenting in a low-output state with decompensated heart failure.21
Intra-aortic balloon counterpulsation, introduced in 1968, has been used in several clinical settings, including acute coronary syndromes, intractable ventricular arrhythmias, and refractory heart failure, and for support of hemodynamics in the perioperative setting. Its role in managing ventricular septal rupture and acute mitral regurgitation is well established. It reliably reduces afterload and improves coronary perfusion, augmenting the cardiac output. This in turn leads to improved systemic perfusion, which can buy time for a critically ill patient during which the primary disease process is addressed.
Recently, a case series in which intra-aortic balloon counterpulsation devices were placed in patients with severe aortic stenosis and cardiogenic shock showed findings similar to those with sodium nitroprusside infusion. Specifically, their use was associated with improved cardiac indices and filling pressures and a decrease in systemic vascular resistance, changes that improved cardiac performance and, in turn, systemic perfusion.52 Thus, intra-aortic balloon counterpulsation can be an option for stabilizing patients with severe aortic stenosis and cardiogenic shock.
Mr. G was treated with sodium nitroprusside and intravenous diuretics. He achieved symptomatic relief and his renal function returned to baseline. He subsequently underwent aortic valve replacement during the hospitalization.
- Carabello BA, Paulus WJ. Aortic stenosis. Lancet 2009; 373:956–966.
- Lindroos M, Kupari M, Heikkilä J, Tilvis R. Prevalence of aortic valve abnormalities in the elderly: an echocardiographic study of a random population sample. J Am Coll Cardiol 1993; 21:1220–1225.
- Otto CM, Pearlman AS, Gardner CL. Hemodynamic progression of aortic stenosis in adults assessed by Doppler echocardiography. J Am Coll Cardiol 1989; 13:545–550.
- Otto CM, Burwash IG, Legget ME, et al. Prospective study of asymptomatic valvular aortic stenosis. Clinical, echocardiographic, and exercise predictors of outcome. Circulation 1997; 95:2262–2270.
- Varadarajan P, Kapoor N, Bansal RC, Pai RG. Clinical profile and natural history of 453 nonsurgically managed patients with severe aortic stenosis. Ann Thorac Surg 2006; 82:2111–2115.
- Turina J, Hess O, Sepulcri F, Krayenbuehl HP. Spontaneous course of aortic valve disease. Eur Heart J 1987; 8:471–483.
- Horstkotte D, Loogen F. The natural history of aortic valve stenosis. Eur Heart J 1988; 9(suppl E):57–64.
- Novaro GM, Tiong IY, Pearce GL, Lauer MS, Sprecher DL, Griffin BP. Effect of hydroxymethylglutaryl coenzyme A reductase inhibitors on the progression of calcific aortic stenosis. Circulation 2001; 104:2205–2209.
- Cowell SJ, Newby DE, Prescott RJ, et al; Scottish Aortic Stenosis and Lipid Lowering Trial, Impact on Regression (SALTIRE) Investigators. A randomized trial of intensive lipid-lowering therapy in calcific aortic stenosis. N Engl J Med 2005; 352:2389–2397.
- Rossebø AB, Pedersen TR, Boman K, et al; SEAS Investigators. Intensive lipid lowering with simvastatin and ezetimibe in aortic stenosis. N Engl J Med 2008; 359:1343–1356.
- Moura LM, Ramos SF, Zamorano JL, et al. Rosuvastatin affecting aortic valve endothelium to slow the progression of aortic stenosis. J Am Coll Cardiol 2007; 49:554–561.
- Rosenhek R, Rader F, Loho N, et al. Statins but not angiotensin-converting enzyme inhibitors delay progression of aortic stenosis. Circulation 2004; 110:1291–1295.
- O’Brien KD, Probstfield JL, Caulfield MT, et al. Angiotensin-converting enzyme inhibitors and change in aortic valve calcium. Arch Intern Med 2005; 165:858–862.
- Lindblom D, Lindblom U, Qvist J, Lundström H. Long-term relative survival rates after heart valve replacement. J Am Coll Cardiol 1990; 15:566–573.
- Makkar RR, Fontana GP, Jilaihawi H, et al; PARTNER Trial Investigators. Transcatheter aortic-valve replacement for inoperable severe aortic stenosis. N Engl J Med 2012; 366:1696–1704.
- Smith CR, Leon MB, Mack MJ, et al; PARTNER Trial Investigators. Transcatheter versus surgical aortic-valve replacement in high-risk patients. N Engl J Med 2011; 364:2187–2198.
- Leon MB, Smith CR, Mack M, et al; PARTNER Trial Investigators. Transcatheter aortic-valve implantation for aortic stenosis in patients who cannot undergo surgery. N Engl J Med 2010; 363:1597–1607.
- Di Eusanio M, Fortuna D, Cristell D, et al; RERIC (Emilia Romagna Cardiac Surgery Registry) Investigators. Contemporary outcomes of conventional aortic valve replacement in 638 octogenarians: insights from an Italian Regional Cardiac Surgery Registry (RERIC). Eur J Cardiothorac Surg 2012; 41:1247–1252.
- Di Eusanio M, Fortuna D, De Palma R, et al. Aortic valve replacement: results and predictors of mortality from a contemporary series of 2256 patients. J Thorac Cardiovasc Surg 2011; 141:940–947.
- Jamieson WR, Edwards FH, Schwartz M, Bero JW, Clark RE, Grover FL. Risk stratification for cardiac valve replacement. National Cardiac Surgery Database. Database Committee of the Society of Thoracic Surgeons. Ann Thorac Surg 1999; 67:943–951.
- Bonow RO, Carabello BA, Chatterjee K, et al; American College of Cardiology/American Heart Association Task Force on Practice Guidelines. 2008 focused update incorporated into the ACC/AHA 2006 guidelines for the management of patients with valvular heart disease: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Writing Committee to revise the 1998 guidelines for the management of patients with valvular heart disease). Endorsed by the Society of Cardiovascular Anesthesiologists, Society for Cardiovascular Angiography and Interventions, and Society of Thoracic Surgeons. J Am Coll Cardiol 2008; 52:e1–e142.
- Hachicha Z, Dumesnil JG, Bogaty P, Pibarot P. Paradoxical low-flow, low-gradient severe aortic stenosis despite preserved ejection fraction is associated with higher afterload and reduced survival. Circulation 2007; 115:2856–2864.
- Vaquette B, Corbineau H, Laurent M, et al. Valve replacement in patients with critical aortic stenosis and depressed left ventricular function: predictors of operative risk, left ventricular function recovery, and long term outcome. Heart 2005; 91:1324–1329.
- Connolly HM, Oh JK, Orszulak TA, et al. Aortic valve replacement for aortic stenosis with severe left ventricular dysfunction. Prognostic indicators. Circulation 1997; 95:2395–2400.
- Connolly HM, Oh JK, Schaff HV, et al. Severe aortic stenosis with low transvalvular gradient and severe left ventricular dysfunction: result of aortic valve replacement in 52 patients. Circulation 2000; 101:1940–1946.
- Pereira JJ, Lauer MS, Bashir M, et al. Survival after aortic valve replacement for severe aortic stenosis with low transvalvular gradients and severe left ventricular dysfunction. J Am Coll Cardiol 2002; 39:1356–1363.
- Pai RG, Varadarajan P, Razzouk A. Survival benefit of aortic valve replacement in patients with severe aortic stenosis with low ejection fraction and low gradient with normal ejection fraction. Ann Thorac Surg 2008; 86:1781–1789.
- Monin JL, Monchi M, Gest V, Duval-Moulin AM, Dubois-Rande JL, Gueret P. Aortic stenosis with severe left ventricular dysfunction and low transvalvular pressure gradients: risk stratification by low-dose dobutamine echocardiography. J Am Coll Cardiol 2001; 37:2101–2107.
- Monin JL, Quéré JP, Monchi M, et al. Low-gradient aortic stenosis: operative risk stratification and predictors for long-term outcome: a multicenter study using dobutamine stress hemodynamics. Circulation 2003; 108:319–324.
- Zuppiroli A, Mori F, Olivotto I, Castelli G, Favilli S, Dolara A. Therapeutic implications of contractile reserve elicited by dobutamine echocardiography in symptomatic, low-gradient aortic stenosis. Ital Heart J 2003; 4:264–270.
- Tribouilloy C, Lévy F, Rusinaru D, et al. Outcome after aortic valve replacement for low-flow/low-gradient aortic stenosis without contractile reserve on dobutamine stress echocardiography. J Am Coll Cardiol 2009; 53:1865–1873.
- Ahmed AA, Graham AN, Lovell D, O’Kane HO. Management of mild to moderate aortic valve disease during coronary artery bypass grafting. Eur J Cardiothorac Surg 2003; 24:535–539.
- Verhoye JP, Merlicco F, Sami IM, et al. Aortic valve replacement for aortic stenosis after previous coronary artery bypass grafting: could early reoperation be prevented? J Heart Valve Dis 2006; 15:474–478.
- Hochrein J, Lucke JC, Harrison JK, et al. Mortality and need for reoperation in patients with mild-to-moderate asymptomatic aortic valve disease undergoing coronary artery bypass graft alone. Am Heart J 1999; 138:791–797.
- Pereira JJ, Balaban K, Lauer MS, Lytle B, Thomas JD, Garcia MJ. Aortic valve replacement in patients with mild or moderate aortic stenosis and coronary bypass surgery. Am J Med 2005; 118:735–742.
- Amato MC, Moffa PJ, Werner KE, Ramires JA. Treatment decision in asymptomatic aortic valve stenosis: role of exercise testing. Heart 2001; 86:381–386.
- Das P, Rimington H, Chambers J. Exercise testing to stratify risk in aortic stenosis. Eur Heart J 2005; 26:1309–1313.
- Weber M, Arnold R, Rau M, et al. Relation of N-terminal pro-B-type natriuretic peptide to severity of valvular aortic stenosis. Am J Cardiol 2004; 94:740–745.
- Weber M, Hausen M, Arnold R, et al. Prognostic value of N-terminal pro-B-type natriuretic peptide for conservatively and surgically treated patients with aortic valve stenosis. Heart 2006; 92:1639–1644.
- Gerber IL, Stewart RA, Legget ME, et al. Increased plasma natriuretic peptide levels reflect symptom onset in aortic stenosis. Circulation 2003; 107:1884–1890.
- Bergler-Klein J, Klaar U, Heger M, et al. Natriuretic peptides predict symptom-free survival and postoperative outcome in severe aortic stenosis. Circulation 2004; 109:2302–2308.
- Lancellotti P, Moonen M, Magne J, et al. Prognostic effect of long-axis left ventricular dysfunction and B-type natriuretic peptide levels in asymptomatic aortic stenosis. Am J Cardiol 2010; 105:383–388.
- Langanay T, Flécher E, Fouquet O, et al. Aortic valve replacement in the elderly: the real life. Ann Thorac Surg 2012; 93:70–77.
- Christofferson RD, Kapadia SR, Rajagopal V, Tuzcu EM. Emerging transcatheter therapies for aortic and mitral disease. Heart 2009; 95:148–155.
- Cribier A, Savin T, Saoudi N, Rocha P, Berland J, Letac B. Percutaneous transluminal valvuloplasty of acquired aortic stenosis in elderly patients: an alternative to valve replacement? Lancet 1986; 1:63–67.
- Percutaneous balloon aortic valvuloplasty. Acute and 30-day follow-up results in 674 patients from the NHLBI Balloon Valvuloplasty Registry. Circulation 1991; 84:2383–2397.
- Otto CM, Mickel MC, Kennedy JW, et al. Three-year outcome after balloon aortic valvuloplasty. Insights into prognosis of valvular aortic stenosis. Circulation 1994; 89:642–650.
- Bernard Y, Etievent J, Mourand JL, et al. Long-term results of percutaneous aortic valvuloplasty compared with aortic valve replacement in patients more than 75 years old. J Am Coll Cardiol 1992; 20:796–801.
- Elkayam U, Janmohamed M, Habib M, Hatamizadeh P. Vasodilators in the management of acute heart failure. Crit Care Med 2008; 36(suppl 1):S95–S105.
- Popovic ZB, Khot UN, Novaro GM, et al. Effects of sodium nitroprusside in aortic stenosis associated with severe heart failure: pressure-volume loop analysis using a numerical model. Am J Physiol Heart Circ Physiol 2005; 288:H416–H423.
- Khot UN, Novaro GM, Popovic ZB, et al. Nitroprusside in critically ill patients with left ventricular dysfunction and aortic stenosis. N Engl J Med 2003; 348:1756–1763.
- Aksoy O, Yousefzai R, Singh D, et al. Cardiogenic shock in the setting of severe aortic stenosis: role of intra-aortic balloon pump support. Heart 2011; 97:838–843.
KEY POINTS
- Calcific aortic stenosis is the most common acquired valvular disease, and its prevalence is increasing as the population ages.
- Patients who have symptoms should be referred for aortic valve replacement. Patients who are not candidates for open heart surgery may be eligible for transcatheter aortic valve replacement.
- For high-risk patients with multiple comorbidities, “bridging” therapies such as aortic valvuloplasty are an option.
- In patients with aortic stenosis who present with hemodynamic instability and circulatory collapse, time can be gained with the use of intravenous sodium nitroprusside (in the absence of hypotension) or intra-aortic balloon counterpulsation while more definitive treatment decisions are being made.
Aortic valve replacement: Options, improvements, and costs
How aortic valve disease is managed continues to evolve, with novel approaches for both aortic valve stenosis and regurgitation.1–8 Indeed, because of the spectrum of procedures, a multispecialty committee was formed to provide a detailed guideline to help physicians work through the various options.4
The paper by Aksoy and colleagues in this issue of the Journal gives further insight into the complexities of decision-making.
As a rule, the indications for a procedure to treat aortic valvular disease continue to be based on whether the patient develops certain symptoms (fatigue, exertional dyspnea, shortness of breath, syncope, chest pain), myocardial deterioration, reduced ejection fraction, or ventricular dilatation.4 Furthermore, the options depend on whether the patient has comorbid disease and is a candidate for surgical aortic valve replacement.
OPEN SURGERY: THE MAINSTAY OF TREATMENT
Open surgery—including in recent years minimally invasive J-incision “keyhole” repair or replacement—has been the mainstay of treatment. The results of surgical aortic valve repair have been excellent, so that 10 years after surgery 95% of patients who have undergone a modified David reimplantation operation have not needed a repeat operation.3 The results are comparable for repair of bicuspid aortic valves.2,3
Furthermore, surgical aortic valve replacement has become very safe. At Cleveland Clinic in 2011, only 3 (0.6%) of 479 patients died during isolated aortic valve replacement, and in 2012 the mortality rate was even better, with only 1 death (0.2%) among 495 patients as of November 2012.
GOOD RESULTS WITH TRANSCATHETER AORTIC VALVE REPLACEMENT
For a new valve procedure to be accepted into practice, it must be easy to perform, safe, and consistently good on performance measures such as producing low gradients, eliminating aortic regurgitation, and yielding high rates of long-term freedom from reoperation and of survival. To see whether percutaneous aortic valve replacement meets these criteria, we at Cleveland Clinic and colleagues at other institutions evaluated it in the laboratory and in feasibility trials in the United States.
The subsequent Placement of Aortic Transcatheter Valves (PARTNER) trial established the benefit of this procedure in terms of superior survival for patients who could not undergo surgery.8 Hence, the transcatheter device was approved for patients who cannot undergo surgery and who meet certain criteria (valve area < 0.8 cm2; mean gradient > 40 mm Hg or peak gradient > 64 mm Hg). Of note, the cost per procedure was $78,000, or approximately $50,000 per year of life saved.
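The peak-gradient criterion maps onto a familiar jet-velocity threshold through the simplified Bernoulli relation used in Doppler echocardiography (peak gradient ≈ 4 × v², where v is the peak jet velocity): a peak gradient of 64 mm Hg corresponds to √(64/4) = 4 m/s, the velocity commonly used to define severe stenosis.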
The PARTNER A trial showed that the risk of death after transcatheter aortic valve replacement was as low as after open surgery, although the risk of stroke or transient ischemic attack was higher; indeed, with the transfemoral approach it was 3 times higher (4.6% vs 1.4%, P < .05).9,10 Furthermore, half the patients had perivalvular leakage after the new procedure, and even mild leakage reduced the survival rate at 2 years.11
Nevertheless, we have now done nearly 400 transcatheter aortic valve replacement procedures in patients who could not undergo open surgery or who would have been at extreme risk during surgery. Of the 267 patients treated with the transfemoral approach, 1 died (0.4%) and 2 had strokes (0.7%). (In the remaining patients we used alternatives to the transfemoral approach, such as the transaortic, transapical, and transaxillary approaches, also with good results.)
Thus, transcatheter aortic valve replacement in properly selected patients can meet the above criteria.
COSTS AND THE FUTURE
Based on the PARTNER trial results, the Centers for Medicare and Medicaid Services (CMS) agreed to pay for this procedure at the same rate as for surgical aortic valve replacement for patients who cannot or should not undergo surgery, with the approval of two surgeons and within the context of a national registry.10
The reimbursement is adjusted for geographic area. In the United States, for example, hospitals on the East Coast or West Coast receive $88,000 to $94,000 per case, while most other areas receive $32,000 to $62,000.
The surgeon and cardiologist share the professional fee of approximately $2,500, although typically we have a team of eight to 10 physicians (representing the fields of anesthesia, echocardiography, surgery, and cardiology) in the operating room for every procedure, in addition to nursing and technical staff. The challenge for institutions and providers, however, is that the device costs $32,500, and CMS reimbursement does not cover the cost of both the valve and the procedure in many localities. This may affect how widely the valve is eventually used.
While many more options are available now for management of aortic valve disease (minimally invasive repair or replacement, and newer devices), the future usage of transcatheter aortic valve replacement may become dependent on costs, newer devices, cheaper iterations, competition, and CMS reimbursement.
There are now two additional trials, SURTAVI and PARTNER A2, evaluating transcatheter vs open aortic valve replacement in lower-risk patients. The issues that will have to be addressed with new iterations are the risk of stroke and transient ischemic attack, perivalvular leakage, and the costs of the devices.
Newer reports would suggest that the results with transcatheter aortic valve replacement in inoperable and high-risk patients continue to improve as experience evolves.
- Svensson LG, Blackstone EH, Cosgrove DM. Surgical options in young adults with aortic valve disease. Curr Probl Cardiol 2003; 28:417–480.
- Svensson LG, Kim KH, Blackstone EH, et al. Bicuspid aortic valve surgery with proactive ascending aorta repair. J Thorac Cardiovasc Surg 2011; 142:622–629.e1–e3.
- Svensson LG, Batizy LH, Blackstone EH, et al. Results of matching valve and root repair to aortic valve and root pathology. J Thorac Cardiovasc Surg 2011; 142:1491–1498.e7.
- Svensson LG, Adams DH, Bonow RO, et al. Aortic valve and ascending aorta guidelines for management and quality measures: executive summary. Ann Thorac Surg 2013; 10.1016/j.athoracsur.2012.12.027, Epub ahead of print
- Svensson LG, D’Agostino RS. “J” incision minimal-access valve operations”. Ann Thorac Surg 1998; 66:1110–1112.
- Johnston DR, Atik FA, Rajeswaran J, et al. Outcomes of less invasive J-incision approach to aortic valve surgery. J Thorac Cardiovasc Surg 2012; 144:852–858.e3.
- Albacker TB, Blackstone EH, Williams SJ, et al. Should less-invasive aortic valve replacement be avoided in patients with pulmonary dysfunction? J Thorac Cardiovasc Surg 2013; Epub ahead of print.
- Leon MB, Smith CR, Mack M, et al; PARTNER Trial Investigators. Transcatheter aortic-valve implantation for aortic stenosis in patients who cannot undergo surgery. N Engl J Med 2010; 363:1597–1607.
- Smith CR, Leon MB, Mack MJ, et al; PARTNER Trial Investigators. Transcatheter versus surgical aortic-valve replacement in high-risk patients. N Engl J Med 2011; 364:2187–2198.
- Svensson LG, Tuzcu M, Kapadia S, et al. A comprehensive review of the PARTNER trial. J Thorac Cardiovasc Surg 2013; 145(suppl):S11–S16.
- Kodali SK, Williams MR, Smith CR, et al; PARTNER Trial Investigators. Two-year outcomes after transcatheter or surgical aortic-valve replacement. N Engl J Med 2012; 366:1686–1695.
How aortic valve disease is managed continues to evolve, with novel approaches for both aortic valve stenosis and regurgitation.1–8 Indeed, because of the spectrum of procedures, a multispecialty committee was formed to provide a detailed guideline to help physicians work through the various options.4
The paper by Aksoy and colleagues in this issue of the Journal gives further insight into the complexities of decision-making.
As a rule, the indications for a procedure to treat aortic valvular disease continue to be based on whether the patient develops certain symptoms (fatigue, exertional dyspnea, shortness of breath, syncope, chest pain), myocardial deterioration, reduced ejection fraction, or ventricular dilatation.4 Furthermore, the options depend on whether the patient has comorbid disease and is a candidate for surgical aortic valve replacement.
OPEN SURGERY: THE MAINSTAY OF TREATMENT
Open surgery—including in recent years minimally invasive J-incision “keyhole” repair or replacement—has been the mainstay of treatment. The results of surgical aortic valve repair have been excellent, so that 10 years after surgery 95% of patients who have undergone a modified David reimplantation operation have not needed a repeat operation.3 The results are comparable for repair of bicuspid aortic valves.2,3
Furthermore, surgical aortic valve replacement has become very safe. At Cleveland Clinic in 2011, only 3 (0.6%) of 479 patients died during isolated aortic valve replacement, and in 2012 the mortality rate was even better, with only 1 death (0.2%) among 495 patients as of November 2012.
GOOD RESULTS WITH TRANSCATHETER AORTIC VALVE REPLACEMENT
For a new valve procedure to be accepted into practice, it must be easy to do, safe, and consistently good in performance measures such as producing low gradients, eliminating aortic regurgitation, and leading to high rates of long-term freedom from reoperation and of survival. To see if percutaneous aortic valve replacement meets these criteria, it was evaluated both by us at Cleveland Clinic and by colleagues at other institutions, in the laboratory and in feasibility trials in the United States.
The subsequent Placement of Transcatheter Aortic Valves (PARTNER) trial established that this procedure confers a survival benefit in patients who could not undergo surgery.8 Hence, the transcatheter device was approved for patients who cannot undergo surgery and who meet certain criteria (valve area < 0.8 cm2; mean gradient > 40 mm Hg or peak gradient > 64 mm Hg). Of note, the cost per procedure was $78,000, or approximately $50,000 per year of life saved.
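As a rough, back-of-the-envelope reading only: taking the two figures just quoted at face value implies the average survival gain per procedure. The trial's formal cost-effectiveness analysis used incremental costs and is more involved, so the calculation below is purely illustrative.

    # Illustrative arithmetic only, using the two figures quoted above.
    cost_per_procedure = 78_000      # US dollars
    cost_per_life_year = 50_000      # US dollars per year of life saved
    implied_life_years_gained = cost_per_procedure / cost_per_life_year
    print(round(implied_life_years_gained, 2))   # about 1.56 years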
The PARTNER A trial showed that the risk of death after transcatheter aortic valve replacement was as low as after open surgery, although the risk of stroke or transient ischemic attack was higher—indeed, with the transfemoral approach it was 3 times higher (4.6% vs 1.4%, P < .05).9,10 Furthermore, half the patients had perivalvular leakage after the new procedure, and even mild leakage reduced the survival rate at 2 years.11
Nevertheless, we have now done nearly 400 transcatheter aortic valve replacement procedures in patients who could not undergo open surgery or who would have been at extreme risk during surgery. With the transfemoral approach, used in 267 patients, 1 patient died (0.4%) and 2 had strokes (0.7%). (In the rest of the patients, we used alternatives to the transfemoral approach, such as the transaortic, transapical, and transaxillary approaches, also with good results.)
Thus, transcatheter aortic valve replacement in properly selected patients can meet the above criteria.
COSTS AND THE FUTURE
Based on the PARTNER trial results, the Centers for Medicare and Medicaid Services (CMS) agreed to pay for this procedure at the same rate as for surgical aortic valve replacement for patients who cannot or should not undergo surgery, with the approval of two surgeons and within the context of a national registry.10
The reimbursement is adjusted for geographic area. In the United States, for example, hospitals on the East Coast or West Coast receive $88,000 to $94,000 per case, while most other areas receive $32,000 to $62,000.
The surgeon and cardiologist share the professional fee of approximately $2,500, although typically we have a team of eight to 10 physicians (representing the fields of anesthesia, echocardiography, surgery, and cardiology) in the operating room for every procedure, in addition to nursing and technical staff. The challenge for institutions and providers, however, is that the device costs $32,500, and CMS reimbursement does not cover the cost of both the valve and the procedure in many localities. This may affect how widely the valve is eventually used.
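A minimal sketch of that reimbursement arithmetic follows. Only the device cost ($32,500) and the reimbursement ranges are taken from the text; the non-device hospital cost and the representative reimbursement figures are assumptions chosen to illustrate why the margin can be negative in low-reimbursement areas.

    # Hypothetical margin calculation; non-device costs and the representative
    # reimbursement amounts are assumptions, not reported figures.
    device_cost = 32_500
    other_hospital_costs = 35_000          # assumed: OR time, staff, imaging, hospital stay
    total_cost = device_cost + other_hospital_costs

    representative_reimbursement = {"coastal hospital": 90_000, "other region": 45_000}
    for setting, payment in representative_reimbursement.items():
        margin = payment - total_cost
        print(f"{setting}: margin of ${margin:,}")
    # coastal hospital: margin of $22,500
    # other region: margin of $-22,500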
While many more options are now available for the management of aortic valve disease (minimally invasive repair or replacement, and newer devices), the future use of transcatheter aortic valve replacement may depend on costs, cheaper iterations, competition from newer devices, and CMS reimbursement.
There are now two additional trials, SURTAVI and PARTNER A2, evaluating transcatheter vs open aortic valve replacement in lower-risk patients. The issues that will have to be addressed with new iterations are the risk of stroke and transient ischemic attack, perivalvular leakage, and the costs of the devices.
Newer reports suggest that the results with transcatheter aortic valve replacement in inoperable and high-risk patients continue to improve as experience grows.
- Svensson LG, Blackstone EH, Cosgrove DM. Surgical options in young adults with aortic valve disease. Curr Probl Cardiol 2003; 28:417–480.
- Svensson LG, Kim KH, Blackstone EH, et al. Bicuspid aortic valve surgery with proactive ascending aorta repair. J Thorac Cardiovasc Surg 2011; 142:622–629.e1–e3.
- Svensson LG, Batizy LH, Blackstone EH, et al. Results of matching valve and root repair to aortic valve and root pathology. J Thorac Cardiovasc Surg 2011; 142:1491–1498.e7.
- Svensson LG, Adams DH, Bonow RO, et al. Aortic valve and ascending aorta guidelines for management and quality measures: executive summary. Ann Thorac Surg 2013; doi:10.1016/j.athoracsur.2012.12.027. Epub ahead of print.
- Svensson LG, D’Agostino RS. “J” incision minimal-access valve operations. Ann Thorac Surg 1998; 66:1110–1112.
- Johnston DR, Atik FA, Rajeswaran J, et al. Outcomes of less invasive J-incision approach to aortic valve surgery. J Thorac Cardiovasc Surg 2012; 144:852–858.e3.
- Albacker TB, Blackstone EH, Williams SJ, et al. Should less-invasive aortic valve replacement be avoided in patients with pulmonary dysfunction? J Thorac Cardiovasc Surg 2013; Epub ahead of print.
- Leon MB, Smith CR, Mack M, et al; PARTNER Trial Investigators. Transcatheter aortic-valve implantation for aortic stenosis in patients who cannot undergo surgery. N Engl J Med 2010; 363:1597–1607.
- Smith CR, Leon MB, Mack MJ, et al; PARTNER Trial Investigators. Transcatheter versus surgical aortic-valve replacement in high-risk patients. N Engl J Med 2011; 364:2187–2198.
- Svensson LG, Tuzcu M, Kapadia S, et al. A comprehensive review of the PARTNER trial. J Thorac Cardiovasc Surg 2013; 145(suppl):S11–S16.
- Kodali SK, Williams MR, Smith CR, et al; PARTNER Trial Investigators. Two-year outcomes after transcatheter or surgical aortic-valve replacement. N Engl J Med 2012; 366:1686–1695.
Bone mineral density testing: Is a T score enough to determine the screening interval?
Some members of the public may have noticed the conclusions of a recent study1 that said that if an older postmenopausal woman has her bone mineral density measured to screen for osteoporosis and has a normal or only mildly low result, she does not need to come back for another measurement for approximately 15 years.
We believe this interpretation of the study’s findings is overly simplistic and may have the unfortunate result of causing some people to neglect their bone health. Moreover, the study looked mainly at baseline T scores as the determinant of the subsequent screening interval. However, clinicians must carefully consider a variety of clinical risk factors when deciding how often to obtain bone mineral density measurements. The ultimate goal is to not miss the window of opportunity for early detection and treatment when it would matter the most (ie, before fractures develop).
Here, we will review this recent study, its findings, and its implications.
OSTEOPOROSIS POSES AN ENORMOUS PUBLIC HEALTH PROBLEM
If we consider only the hip, an estimated 10 million people in the United States have osteoporosis (T score ≤ −2.5 or a preexisting fragility fracture), and 33.6 million have osteopenia (T score −1.01 to −2.49).2 The number of people with osteopenia can be assumed to be much higher if other skeletal sites are considered.
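For concreteness, the T-score cut points just quoted can be written as a simple classification rule; the function below is a hypothetical sketch (and, as noted above, a preexisting fragility fracture also establishes osteoporosis regardless of the T score).

    def classify_bmd(t_score: float) -> str:
        """Classify bone mineral density by T score, using the cut points quoted above."""
        if t_score <= -2.5:
            return "osteoporosis"
        elif t_score <= -1.01:    # -1.01 to -2.49
            return "osteopenia"
        else:
            return "normal"

    print(classify_bmd(-1.8))     # osteopenia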
By increasing the risk of fragility fractures, osteoporosis poses an enormous public health problem. The surgeon general’s report points out that one of every two white women over age 50 will experience an osteoporosis-related fracture in her lifetime.3 Of all osteoporosis-related fractures, those of the hip carry the worst clinical outcome. Approximately one in five elderly people who experience an osteoporosis-related hip fracture need long-term nursing home care, and as many as 20% die within 1 year.3
In recognition of the burden of osteoporosis, the US Preventive Services Task Force (USPSTF)4 and other scientific bodies2,3 recommend an initial bone mineral density test for all women age 65 and older. Dual-energy x-ray absorptiometry (DXA) is considered the gold standard for bone mineral density testing. Although the patient population that should receive an initial bone mineral density test has been clearly identified (see below), guidelines on the optimal frequency of testing do not exist, as data have been lacking. Recognizing this knowledge gap, Gourlay et al1 attempted to answer the question of how often elderly postmenopausal women should be retested.
WHEN DO 10% OF ELDERLY POSTMENOPAUSAL WOMEN REACH A T SCORE OF −2.5?
Gourlay et al1 analyzed data from 4,957 women in the Study of Osteoporotic Fractures. These women were predominantly white, were at least 67 years old and ambulatory, and had normal bone mineral density or osteopenia and no history of hip or clinical vertebral fracture at baseline. They had been recruited between 1986 and 1988 at sites in Baltimore, MD, Minneapolis, MN, the Monongahela Valley near Pittsburgh, PA, and Portland, OR.
DXA of the hip had been performed at baseline and at multiple times thereafter. The average follow-up time was 8 years.
The primary outcome was the time it took for 10% of the patients to progress from normal bone mineral density or from osteopenia to osteoporosis (a T score of −2.5 or less at the femoral neck or total hip) before they developed a fracture or needed treatment for osteoporosis.
Clinical risk factors such as age, body mass index, estrogen use at baseline, fracture after age 50, current smoking, current or past use of glucocorticoids, and self-reported rheumatoid arthritis were included as covariates in time-to-event analyses.
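For readers interested in the statistics, the sketch below illustrates the general idea of estimating the time by which 10% of a cohort has made the transition, using a simple Kaplan-Meier curve on made-up data. It is not the authors’ method, which used covariate-adjusted time-to-event models as described above.

    # Illustration only: synthetic data and an unadjusted Kaplan-Meier estimate,
    # not the covariate-adjusted models used in the study.
    import numpy as np
    from lifelines import KaplanMeierFitter

    rng = np.random.default_rng(0)
    years_to_osteoporosis = 20 * rng.weibull(1.5, size=500)   # made-up transition times
    observed = rng.random(500) < 0.7                          # made-up censoring indicator

    kmf = KaplanMeierFitter()
    kmf.fit(years_to_osteoporosis, event_observed=observed)

    # First time point at which the estimated fraction remaining free of
    # osteoporosis drops to 90% (ie, 10% have made the transition).
    km = kmf.survival_function_["KM_estimate"]
    print(km[km <= 0.90].index.min())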
ANSWER: 16.8 YEARS (IF NORMAL AT BASELINE)
The authors estimated that 10% of women would make the transition to osteoporosis before having a hip or clinical vertebral fracture in the following intervals:
- 16.8 years in women whose bone mineral density was normal at baseline (T score at femoral neck and total hip of −1.00 or higher)
- 17.3 years in women who had mild osteopenia at baseline (T score −1.01 to −1.49)
- 4.7 years in women with moderate osteopenia at baseline (T score −1.50 to −1.99)
- 1.1 years in women with advanced osteopenia at baseline (T score −2.00 to −2.49).
The authors also found that body mass index and current estrogen use were the only clinical risk factors that influenced these intervals; other clinical factors such as a fracture after age 50, current smoking, previous or current use of oral glucocorticoids, and self-reported rheumatoid arthritis did not.
They concluded that osteoporosis would develop in fewer than 10% of women if the rescreening interval was lengthened to 15 years for women with normal density or mild osteopenia, 5 years for women with moderate osteopenia, and 1 year for women with advanced osteopenia.
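As a sketch only, the schedule they propose can be written as a lookup on the baseline hip T score. The function below encodes nothing beyond the intervals just listed and deliberately ignores the clinical caveats discussed in the rest of this article.

    def proposed_rescreening_interval_years(baseline_t_score: float) -> int:
        """Rescreening interval proposed by Gourlay et al, by baseline hip T score.
        Illustrative sketch; clinical risk factors are not considered."""
        if baseline_t_score >= -1.49:     # normal bone density or mild osteopenia
            return 15
        elif baseline_t_score >= -1.99:   # moderate osteopenia
            return 5
        else:                             # advanced osteopenia (-2.00 to -2.49)
            return 1

    print(proposed_rescreening_interval_years(-1.45))   # 15
    print(proposed_rescreening_interval_years(-1.51))   # 5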
WHAT DOES THIS MEAN FOR THE PRACTICING CLINICIAN?
Who needs an initial DXA test according to current guidelines?
The USPSTF,4 the National Osteoporosis Foundation (NOF),5 the International Society for Clinical Densitometry (ISCD),6 and the American Association of Clinical Endocrinologists (AACE)7 propose that the following groups should undergo DXA:
- All women age 65 and older
- All postmenopausal women who have had a fragility fracture or who have one or more risk factors for osteoporosis (height loss, body mass index < 20 kg/m2, family history of osteoporosis, active smoking, excessive alcohol consumption)
- Adults who have a condition (eg, rheumatoid arthritis) or are taking a medication (eg, glucocorticoids in a daily dose ≥ 5 mg of prednisone or its equivalent for ≥ 3 months) associated with low bone mass or bone loss
- Anyone being considered for drug therapy for osteoporosis, discontinuing therapy for osteoporosis (including estrogen), or being treated for osteoporosis, to monitor the effect of treatment.
Assessing fracture risk. Although clinicians have traditionally relied on bone mineral density obtained by DXA to estimate fracture risk, the World Health Organization has developed a computer-based algorithm that calculates an individual’s 10-year fracture probability from easily obtained clinical risk factors with or without adding femoral-neck bone mineral density. The Fracture Risk Assessment tool, or FRAX, has attracted intense interest since its introduction in 2007 and has been endorsed by the USPSTF4 and by other scientific societies, including the NOF5 and the ISCD.8 In fact, the most recent USPSTF guidelines,4 which recommend screening all women age 65 and older, call for using FRAX to identify younger women at higher risk of fracture.
According to FRAX, a 65-year-old white woman who has no risk factors has a 9.3% chance of developing a major osteoporotic fracture in the next 10 years. If a younger woman (between the ages of 50 and 64) has a fracture risk as high as or higher than that of a 65-year-old white woman with no risk factors, then she too should be screened by DXA.
The FRAX calculator is available online at www.shef.ac.uk/FRAX.
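A minimal sketch of the screening decision rule just described: the FRAX probability itself must come from the calculator above (its internal algorithm is not reproduced here); the hypothetical function below only applies the 9.3% reference threshold.

    REFERENCE_RISK_PCT = 9.3   # 10-year major osteoporotic fracture risk of a
                               # 65-year-old white woman with no risk factors

    def screen_younger_woman_with_dxa(frax_major_fracture_risk_pct: float) -> bool:
        """For a postmenopausal woman aged 50 to 64: screen with DXA if her FRAX
        risk is at least that of the 65-year-old reference woman. Illustrative only."""
        return frax_major_fracture_risk_pct >= REFERENCE_RISK_PCT

    print(screen_younger_woman_with_dxa(10.1))   # True
    print(screen_younger_woman_with_dxa(6.0))    # False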
What are the current recommendations about follow-up DXA testing?
In eligible patients, the Centers for Medicare and Medicaid Services will pay for a DXA scan every 2 years. This interval is based on the concept that in an otherwise healthy person, it takes a minimum of 2 years to see a significant change in bone mineral density that can be attributed to a biological change in the bone and not just chance. The USPSTF4 and scientific societies such as the NOF5 generally agree with the Medicare guidelines of retesting every 2 years but recognize certain clinical situations that may warrant more frequent retesting (see below).
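The reasoning behind the 2-year minimum can be made concrete with the densitometry concept of the least significant change, conventionally 2.77 times the precision error of the measurement at 95% confidence. The precision error and annual loss rate in the sketch below are assumed values (they vary by center and by patient) and are chosen only to show the arithmetic.

    # The 2.77 multiplier is the conventional 95%-confidence factor (1.96 x sqrt(2));
    # the precision error and annual loss rate are assumptions for illustration.
    precision_error_pct = 1.0    # assumed DXA precision error at the hip, in %
    annual_loss_pct = 1.5        # assumed annual bone mineral density loss, in %

    least_significant_change = 2.77 * precision_error_pct         # about 2.8%
    years_to_detect = least_significant_change / annual_loss_pct  # about 1.8 years
    print(round(least_significant_change, 2), round(years_to_detect, 2))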
But the real question is how long the DXA screening interval can be extended so that meaningful information can still be obtained to help make management decisions and before a complication such as a fracture occurs. While there is convincing evidence to support the recommendations for an initial DXA test, data to answer the question of how long the retesting interval should be are lacking.
Before the study by Gourlay et al,1 the only data on repeat DXA came from work by Hillier et al.9 But those investigators asked a different question. They were interested in how well repeated measurements predicted fractures. They used the same population that Gourlay et al did but evaluated fractures, not T scores. They concluded that in healthy, adult postmenopausal women, repeating the bone mineral density measurement up to 8 years later adds little value to initial measurement for predicting incident fractures.
Clinical factors also count
The T score should not be the only major factor determining the interval for bone mineral density testing in elderly women; clinical risk factors also should be kept in mind.
Gourlay et al concluded that age and T scores are the key predictive factors in determining the bone mineral density testing interval in elderly, postmenopausal women for screening purposes.1 In their statistical model, clinical risk factors such as fracture after age 50, current smoking, previous or current use of glucocorticoids, and self-reported rheumatoid arthritis did not influence the testing interval. They say that clinicians should not feel compelled to shorten the testing interval when these risk factors are present.
Readers may take this to mean that if these results were strictly applied to a 70-year-old white woman receiving oral glucocorticoids for rheumatoid arthritis and who has a baseline T score of −1.45, then her next test may be postponed by 15 years (given that neither of these factors influenced the testing interval). Readers may also conclude that if this patient’s T score were −1.51, then her screening interval would be 5 years and not 15 years.
However, Gourlay et al say1 that clinicians can choose to shorten the testing interval if there is evidence of decreased activity or mobility, weight loss, or other risk factors not considered in their analysis.
Soon after this study1 was published, Lewiecki et al10 and others11–13 published critical commentaries addressing controversial issues surrounding the study. They highlighted the importance of considering clinical risk factors for fracture in addition to the femoral neck and total hip T scores. In response to these comments, Gourlay et al clarified that their results were not generalizable to patients with secondary osteoporosis, such as those taking glucocorticoids or those who have rheumatic diseases.14
Readers should keep in mind that clinical risk factors make independent contributions to fracture risk (Figure 1).15
Readers should also recognize the following groups, to whom the results of the study by Gourlay et al do not apply because they were not included in the study:
- Men
- Women other than white women
- Women already diagnosed with osteoporosis and on bisphosphonates or any other osteoporosis treatment (except for estrogen). The findings also do not apply to:
- Patients who experience a significant decline in health status or who develop new clinical conditions (such as hyperparathyroidism, paraproteinemias, or type 2 diabetes) or who use medications such as glucocorticoids that cause rapid bone loss. Changes in clinical situations such as these may necessitate more frequent bone mineral density testing in spite of a “good” baseline T score.
- Perimenopausal women or women who received their first bone mineral density test before age 65. Perimenopause and menopause may trigger rapid bone loss, which may be as much as one T-score point (ie, 1 standard deviation) at the spine and femoral neck.16 Therefore, testing done during this time cannot be used as the basis of future monitoring.
The study did not address asymptomatic vertebral fractures and lumbar spine density
Gourlay et al1 did not take into account asymptomatic spinal fractures; they used only clinical vertebral fractures in their risk estimates of spinal fractures. Ascertainment of morphometric spinal fractures may be methodologically challenging, but if the study had included these fractures, the outcomes and conclusions could have been very different.
Vertebral fractures are present in as many as 14% to 33% of postmenopausal women17 and indicate osteoporosis, regardless of the bone mineral density. Moreover, most vertebral fractures are clinically silent and escape detection: only approximately one in three radiographically defined vertebral fractures is reported clinically.18,19 Given the prevalence of these fractures, we and others10 have noted that the results of the Gourlay study may be biased toward longer screening intervals because morphometric vertebral fractures were not accounted for.
Gourlay et al used T scores only of the femoral neck and total hip and not those of the lumbar spine. Some studies have found that hip measurements may be superior to spine measurements for overall osteoporotic fracture prediction.20,21 However, lumbar spine bone mineral density is predictive of fracture at other skeletal sites,22,23 is a widely accepted skeletal site measurement, and is used to diagnose osteoporosis. Moreover, the lumbar spine T score can be −2.5 or lower even when the total hip or femoral neck T score is higher than −2.5.
More fractures occur in people with osteopenia than with osteoporosis
Osteoporosis imparts a much higher risk of fracture than does osteopenia. However, if one recognizes the much greater prevalence of osteopenia (33.6 million people) compared with osteoporosis (10 million),2 it is not hard to appreciate that the number of fractures is higher in the osteopenic group than in those with osteoporosis based on T scores. Siris et al24 point out that at least half of osteoporotic fractures are in patients with osteopenia, who comprise a larger segment of the population than those with osteoporosis.
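The arithmetic behind that observation is simple. Only the two prevalence figures below come from the text; the annual fracture rates are hypothetical and are chosen merely to show how a lower per-person risk in a much larger group can produce more total fractures.

    # Hypothetical illustration; fracture rates are assumed, prevalences are from the text.
    groups = {
        "osteoporosis": {"people": 10_000_000, "annual_fracture_rate": 0.03},
        "osteopenia":   {"people": 33_600_000, "annual_fracture_rate": 0.01},
    }
    for name, g in groups.items():
        print(name, f"{g['people'] * g['annual_fracture_rate']:,.0f} fractures per year")
    # osteoporosis 300,000 fractures per year
    # osteopenia   336,000 fractures per year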
Some clinical trials have shown that bisphosphonates are not effective in preventing clinical fractures in women who do not have osteoporosis.25,26 However, clinicians must recognize that while bisphosphonates may not be as effective in preventing fractures in the osteopenic group with no other clinical risk factors, the presence of multiple clinical risk factors incrementally increases the fracture risk (which can be assessed via FRAX) and may require starting drug therapy earlier.
Women with vertebral fractures are considered to have clinical osteoporosis even if they have T scores in the osteopenic range, and must be considered for drug therapy.
The public health burden of fractures will not decrease unless individuals with low bone mineral density who are at an increased risk of fracture are identified and treated.24
Is DXA testing overused or underused? Does it decrease the rate of fractures?
The study of Gourlay et al1 captured a lot of media attention, with many newspapers and blogs claiming that women may be getting tested too often.27,28 In reality, however, this test is highly underused. The 2011 Healthcare Effectiveness Data and Information Set report noted that 71.0% of women in Medicare health maintenance organizations and 75.0% of women in Medicare preferred provider organizations had ever had a bone mineral density test for osteoporosis.29 While these numbers may not appear to be too far from the target, they are a poor gauge of DXA use, as they include all types of bone mineral density tests in a woman’s lifetime, including even heel tests at health fairs.
Central DXA is used far less than one might expect. In a recent analysis, King and Fiorentino showed that only about 14% of fee-for-service Medicare beneficiaries 65 years and older had one or more DXA tests in 2010.30 DXA retesting also does not seem to be an issue: only 1 in 10 elderly women reported having had a repeat test at a 2-year interval, and fewer than 1 in 100 reported more frequent testing.30
- Gourlay ML, Fine JP, Preisser JS, et al; Study of Osteoporotic Fractures research group. Bone-density testing interval and transition to osteoporosis in older women. N Engl J Med 2012; 366:225–233.
- National Osteoporosis Foundation (NOF). America’s Bone Health: The State of Osteoporosis and Low Bone Mass in Our Nation. Washington, DC: National Osteoporosis Foundation; 2002.
- US Department of Health and Human Services. Bone Health and Osteoporosis: A Report of the Surgeon General. Rockville, MD: US Department of Health and Human Services, Office of the Surgeon General; 2004.
- US Preventive Services Task Force. Screening for osteoporosis: US preventive services task force recommendation statement. Ann Intern Med 2011; 154:356–364.
- National Osteoporosis Foundation. Clinician’s Guide to Prevention and Treatment of Osteoporosis. Washington, DC: National Osteoporosis Foundation; 2010.
- Baim S, Binkley N, Bilezikian JP, et al. Official Positions of the International Society for Clinical Densitometry and executive summary of the 2007 ISCD Position Development Conference. J Clin Densitom 2008; 11:75–91.
- Watts NB, Bilezikian JP, Camacho PM, et al; AACE Osteoporosis Task Force. American Association of Clinical Endocrinologists Medical Guidelines for Clinical Practice for the diagnosis and treatment of postmenopausal osteoporosis. Endocr Pract 2010; 16(suppl 3):1–37.
- The International Society for Clinical Densitometry (ISCD); the International Osteoporosis Foundation (IOF). 2010 Official Positions on FRAX. www.iscd.org/official-positions. Accessed February 1, 2013.
- Hillier TA, Stone KL, Bauer DC, et al. Evaluating the value of repeat bone mineral density measurement and prediction of fractures in older women: the study of osteoporotic fractures. Arch Intern Med 2007; 167:155–160.
- Lewiecki EM, Laster AJ, Miller PD, Bilezikian JP. More bone density testing is needed, not less. J Bone Miner Res 2012; 27:739–742.
- Leslie WD, Morin SN, Lix LM. Bone-density testing interval and transition to osteoporosis. N Engl J Med 2012; 366:1547.
- Endocrine Society. The Endocrine Society Recommends Individualization of Bone Mineral Density Testing Frequency in Women Over the Age of 67: February 7, 2012. http://www.endo-society.org/advocacy/legislative/letters/upload/Endocrine-Society-Response-to-BMD-Testing-Final.pdf. Accessed January 29, 2013.
- The International Society for Clinical Densitometry (ISCD). ISCD response to NEJM article: January 20, 2012. http://www.american-bonehealth.org/images/stories/BMD_Testing_Interval_ISCD_Response_to_NEJM_Article.pdf. Accessed January 29, 2013.
- Gourlay ML, Preisser JS, Lui LY, Cauley JA, Ensrud KE; Study of Osteoporotic Fractures Research Group. BMD screening in older women: initial measurement and testing interval. J Bone Miner Res 2012; 27:743–746.
- Kanis JA, Oden A, Johansson H, Borgström F, Ström O, McCloskey E. FRAX and its applications to clinical practice. Bone 2009; 44:734–743.
- Recker RR. Early postmenopausal bone loss and what to do about it. Ann NY Acad Sci 2011; 1240:E26–E30.
- Genant HK, Jergas M, Palermo L, et al. Comparison of semiquantitative visual and quantitative morphometric assessment of prevalent and incident vertebral fractures in osteoporosis. The Study of Osteoporotic Fractures Research Group. J Bone Miner Res 1996; 11:984–996.
- Black DM, Cummings SR, Karpf DB, et al. Randomised trial of effect of alendronate on risk of fracture in women with existing vertebral fractures. Fracture Intervention Trial Research Group. Lancet 1996; 348:1535–1541.
- Nevitt MC, Ettinger B, Black DM, et al. The association of radiographically detected vertebral fractures with back pain and function: a prospective study. Ann Intern Med 1998; 128:793–800.
- Leslie WD, Tsang JF, Caetano PA, Lix LM; Manitoba Bone Density Program. Effectiveness of bone density measurement for predicting osteoporotic fractures in clinical practice. J Clin Endocrinol Metab 2007; 92:77–81.
- Leslie WD, Lix LM, Tsang JF, Caetano PA; Manitoba Bone Density Program. Single-site vs multisite bone density measurement for fracture prediction. Arch Intern Med 2007; 167:1641–1647.
- Stone KL, Seeley DG, Lui LY, et al; Osteoporotic Fractures Research Group. BMD at multiple sites and risk of fracture of multiple types: long-term results from the Study of Osteoporotic Fractures. J Bone Miner Res 2003; 18:1947–1954.
- Black DM, Cummings SR, Genant HK, Nevitt MC, Palermo L, Browner W. Axial and appendicular bone density predict fractures in older women. J Bone Miner Res 1992; 7:633–638.
- Siris ES, Baim S, Nattiv A. Primary care use of FRAX: absolute fracture risk assessment in postmenopausal women and older men. Postgrad Med 2010; 122:82–90.
- Cummings SR, Black DM, Thompson DE, et al. Effect of alendronate on risk of fracture in women with low bone density but without vertebral fractures: results from the Fracture Intervention Trial. JAMA 1998; 280:2077–2082.
- McClung MR, Geusens P, Miller PD, et al; Hip Intervention Program Study Group. Effect of risedronate on the risk of hip fracture in elderly women. Hip Intervention Program Study Group. N Engl J Med 2001; 344:333–340.
- Park A. How often do women really need bone density tests? Time: Health & Family. January 19, 2012. http://healthland.time.com/2012/01/19/most-women-may-be-getting-too-many-bone-density-tests/. Accessed January 29, 2013.
- Kolata G. Osteoporosis patients advised to delay bone density retests. The New York Times: Health. January 19, 2012. http://query.nytimes.com/gst/fullpage.html?res=9B01E1D61230F93AA25752C0A9649D8B63. Accessed January 29, 2013.
- National Committee for Quality Assurance. The State of Health Care Quality Report. http://www.ncqa.org/Portals/0/State%20of%20Health%20Care/2012/SOHC%20Report%20Web.pdf. Accessed February 1, 2013.
- King AB, Fiorentino DM. Medicare payment cuts for osteoporosis testing reduced use despite tests’ benefit in reducing fractures. Health Aff (Millwood) 2011; 30:2362–2370.
Some members of the public may have noticed the conclusions of a recent study1 that said that if an older postmenopausal woman has her bone mineral density measured to screen for osteoporosis and has a normal or only mildly low result, she does not need to come back for another measurement for approximately 15 years.
We believe this interpretation of the study’s findings is overly simplistic and may have the unfortunate result of causing some people to neglect their bone health. Moreover, the study looked mainly at baseline T scores as the determinant of the subsequent screening interval. However, clinicians must carefully consider a variety of clinical risk factors when deciding how often to obtain bone mineral density measurements. The ultimate goal is to not miss the window of opportunity for early detection and treatment when it would matter the most (ie, before fractures develop).
Here, we will review this recent study, its findings, and its implications.
OSTEOPOROSIS POSES AN ENORMOUS PUBLIC HEALTH PROBLEM
If we consider only the hip, an estimated 10 million people in the United States have osteoporosis (T score ≤ −2.5 or a preexisting fragility fracture), and 33.6 million have osteopenia (T score −1.01 to −2.49).2 The number of people with osteopenia can be assumed to be much higher if other skeletal sites are considered.
By increasing the risk of fragility fractures, osteoporosis poses an enormous public health problem. The surgeon general’s report points out that one of every two white women over age 50 will experience an osteoporosis-related fracture in her lifetime.3 Of all osteoporosis-related fractures, those of the hip carry the worse clinical outcome. Approximately one in five elderly people who experience an osteoporosis-related hip fracture need long-term nursing home care, and as many as 20% die within 1 year.3
In recognition of the burden of osteoporosis, the US Preventive Services Task Force (USPSTF)4 and other scientific bodies2,3 recommend an initial bone mineral density test for all women age 65 and older. Dual-energy x-ray absorptiometry (DXA) is considered the gold standard for bone mineral density testing. Although the patient population that should receive an initial bone mineral density test has been clearly identified (see below), guidelines on the optimal frequency of testing do not exist, as data have been lacking. Recognizing this knowledge gap, Gourlay et al1 attempted to answer the question of how often elderly postmenopausal women should be retested.
WHEN DO 10% OF ELDERLY POSTMENOPAUSAL WOMEN REACH A T SCORE OF −2.5?
Gourlay et al1 analyzed data from 4,957 women in the Study of Osteoporotic Fractures. These women were predominantly white, were at least 67 years old and ambulatory, and had normal bone mineral density or osteopenia and no history of hip or clinical vertebral fracture at baseline. They had been recruited between 1986 and 1988 at sites in Baltimore, MD, Minneapolis, MN, the Monongahela Valley near Pittsburgh, PA, and Portland, OR.
DXA of the hip had been performed at baseline and at multiple times thereafter. The average follow-up time was 8 years.
The primary outcome measured was how long it took for 10% of the patients to reach a T score of −2.5 or less at the femoral neck or total hip as they progressed from having normal bone mineral density to osteoporosis or from osteopenia to osteoporosis and before they developed a fracture or needed treatment for osteoporosis.
Clinical risk factors such as age, body mass index, estrogen use at baseline, fracture after age 50, current smoking, current or past use of glucocorticoids, and self-reported rheumatoid arthritis were included as covariates in time-to-event analyses.
ANSWER: 16.8 YEARS (IF NORMAL AT BASELINE)
The authors estimated that 10% of women would make the transition to osteoporosis before having a hip or clinical vertebral fracture in the following intervals:
- 16.8 years in women whose bone mineral density was normal at baseline (T score at femoral neck and total hip of −1.00 or higher)
- 17.3 years in women who had mild osteopenia at baseline (T score −1.01 to −1.49)
- 4.7 years in women with moderate osteopenia at baseline (T score −1.5 to −1.99)
- 1.1 years in women with advanced osteopenia at baseline (T score −2.00 to −2.49).
The authors also found that body mass index and current estrogen use were the only clinical risk factors that influenced these intervals; other clinical factors such as a fracture after age 50, current smoking, previous or current use of oral glucocorticoids, and self-reported rheumatoid arthritis did not.
They concluded that osteoporosis would develop in fewer than 10% of women if the rescreening interval was lengthened to 15 years for women with normal density or mild osteopenia, 5 years for women with moderate osteopenia, and 1 year for women with advanced osteopenia.
WHAT DOES THIS MEAN FOR THE PRACTICING CLINICIAN?
Who needs an initial DXA test according to current guidelines?
The USPSTF,4 the National Osteoporosis Foundation (NOF),5 the International Society for Clinical Densitometry (ISCD),6 and the American Association of Clinical Endocrinologists (AACE)7 propose that the following groups should undergo DXA:
- All women age 65 and older
- All postmenopausal women who have had a fragility fracture or who have one or more risk factors for osteoporosis (height loss, body mass index < 20 kg/m2, family history of osteoporosis, active smoking, excessive alcohol consumption)
- Adults who have a condition (eg, rheumatoid arthritis) or are taking a medication (eg, glucocorticoids in a daily dose ≥ 5 mg of prednisone or its equivalent for ≥ 3 months) associated with low bone mass or bone loss
- Anyone being considered for drug therapy for osteoporosis, discontinuing therapy for osteoporosis (including estrogen), or being treated for osteoporosis, to monitor the effect of treatment.
Assessing fracture risk. Although clinicians have traditionally relied on bone mineral density obtained by DXA to estimate fracture risk, the World Health Organization has developed a computer-based algorithm that calculates an individual’s 10-year fracture probability from easily obtained clinical risk factors with or without adding femoral-neck bone mineral density. The Fracture Risk Assessment tool, or FRAX, has attracted intense interest since its introduction in 2007 and has been endorsed by the USPSTF4 and by other scientific societies, including the NOF5 and the ISCD.8 In fact, the most recent USPSTF guidelines,4 which recommend screening all women age 65 and older, call for using FRAX to identify younger women at higher risk of fracture.
According to FRAX, a 65-year-old white woman who has no risk factors has a 9.3% chance of developing a major osteoporotic fracture in the next 10 years. And if a younger woman (between the ages of 50 and 64) has a fracture risk as high or higher than a 65-year-old white woman who has no risk factors, then she too should be screened by DXA.
The FRAX calculator is available online at www.shef.ac.uk/FRAX.
What are the current recommendations about follow-up DXA testing?
In eligible patients, the Centers for Medicare and Medicaid Services will pay for a DXA scan every 2 years. This interval is based on the concept that in an otherwise healthy person, it takes a minimum of 2 years to see a significant change in bone mineral density that can be attributed to a biological change in the bone and not just chance. The USPSTF4 and scientific societies such as the NOF5 generally agree with the Medicare guidelines of retesting every 2 years but recognize certain clinical situations that may warrant more frequent retesting (see below).
But the real question is how long the DXA screening interval can be extended so that meaningful information can still be obtained to help make management decisions and before a complication such as a fracture occurs. While there is convincing evidence to support the recommendations for an initial DXA test, data to answer the question of how long the resting interval should be are lacking.
Before the study by Gourlay et al,1 the only data on repeat DXA came from work by Hillier et al.9 But those investigators asked a different question. They were interested in how well repeated measurements predicted fractures. They used the same population that Gourlay et al did but evaluated fractures, not T scores. They concluded that in healthy, adult postmenopausal women, repeating the bone mineral density measurement up to 8 years later adds little value to initial measurement for predicting incident fractures.
Clinical factors also count
The T score should not be the only major factor determining the interval for bone mineral density testing in elderly women; clinical risk factors also should be kept in mind.
Gourlay et al concluded that age and T scores are the key predictive factors in determining the bone mineral density testing interval in elderly, postmenopausal women for screening purposes.1 In their statistical model, clinical risk factors such as fracture after age 50, current smoking, previous or current use of glucocorticoids, and self-reported rheumatoid arthritis did not influence the testing interval. They say that clinicians should not feel compelled to shorten the testing interval when these risk factors are present.
Readers may take this to mean that if these results were strictly applied to a 70-year-old white woman receiving oral glucocorticoids for rheumatoid arthritis and who has a baseline T score of −1.45, then her next test may be postponed by 15 years (given that both these factors did not influence the testing interval). Readers may also conclude that if this patient’s T score were −1.51, then her screening interval would be 5 years and not 15 years.
However, Gourlay et al say1 that clinicians can choose to shorten the testing interval if there is evidence of decreased activity or mobility, weight loss, or other risk factors not considered in their analysis.
Soon after this study1 was published, Lewiecki et al10 and others11–13 published critical commentaries addressing controversial issues surrounding the study. They highlighted the importance of considering clinical risk factors for fracture in addition to the femoral neck and total hip T scores. In response to these comments, Gourlay et al clarified that their results were not generalizable to patients with secondary osteoporosis, such as those taking glucocorticoids or those who have rheumatic diseases.14
Readers should keep in mind that clinical risk factors make independent contributions to fracture risk (Figure 1).15
Readers should also recognize the following groups in whom the results of the study by Gourlay et al are not applicable since they were not included in their study:
- Men
- Women other than white women
- Women already diagnosed with osteoporosis and on bisphosphonates or any other osteoporosis treatment (except for estrogen). The findings also do not apply to:
- Patients who experience a significant decline in health status or who develop new clinical conditions (such as hyperparathyroidism, paraproteinemias, or type 2 diabetes) or who use medications such as glucocorticoids that cause rapid bone loss. Changes in clinical situations such as these may necessitate more frequent bone mineral density testing in spite of a “good” baseline T score.
- Perimenopausal women or women who received their first bone mineral density test before age 65. Perimenopause and menopause may trigger rapid bone loss, which may be as much as one T-score point (ie, 1 standard deviation) at the spine and femoral neck.16 Therefore, testing done during this time cannot be used as the basis of future monitoring.
The study did not address asymptomatic vertebral fractures and lumbar spine density
Gourlay et al1 did not take into account asymptomatic spinal fractures; they used only clinical vertebral fractures in their risk estimates of spinal fractures. Ascertainment of morphometric spinal fractures may be methodologically challenging, but if the study had included these fractures, the outcomes and conclusions could have been very different.
Vertebral fractures are present in as many as 14% to 33% of postmenopausal women17 and indicate osteoporosis (regardless of the bone mineral density). Moreover, most vertebral fractures are clinically silent and escape detection, and approximately only one in three radiographically defined vertebral fractures is reported clinically.18,19 Given the prevalence of these fractures, we and others10 have noted that the results of the Gourlay study may be biased toward longer screening intervals because they did not account for morphometric vertebral fractures.
Gourlay et al used T scores only of the femoral neck and total hip and not those of the lumbar spine. Some studies have found that hip measurements may be superior to spine measurements for overall osteoporotic fracture prediction.20,21 However, lumbar spine bone mineral density is predictive of fracture at other skeletal sites,22,23 is a widely accepted skeletal site measurement, and is used to diagnose osteoporosis. Moreover, the lumbar spine T score can be −2.5 or higher even if the total hip or femoral neck T score is lower than −2.5.
More fractures occur in people with osteopenia than with osteoporosis
Osteoporosis imparts a much higher risk of fracture than does osteopenia. However, if one recognizes the much greater prevalence of osteopenia (33.6 million people) compared with osteoporosis (10 million),2 it is not hard to appreciate that the number of fractures is higher in the osteopenic group than in those with osteoporosis based on T scores. Siris et al24 point out that at least half of osteoporotic fractures are in patients with osteopenia, who comprise a larger segment of the population than those with osteoporosis.
Some clinical trials have shown that bisphosphonates are not effective in preventing clinical fractures in women who do not have osteoporosis.25,26 However, clinicians must recognize that while bisphosphonates may not be as effective in preventing fractures in the osteopenic group with no other clinical risk factors, the presence of multiple clinical risk factors incrementally increases the fracture risk (which can be assessed via FRAX) and may require starting drug therapy earlier.
Women with vertebral fractures are considered to have clinical osteoporosis even if they have T scores in the osteopenic range, and must be considered for drug therapy.
The public health burden of fractures will not decrease unless individuals with low bone mineral density who are at an increased risk of fracture are identified and treated.24
Is DXA testing overused or underused? does it decrease the rate of fractures?
The study of Gourlay et al1 captured a lot of media attention, with many newspapers and blogs claiming that women may be getting tested too often.27,28 However, in reality, this test is highly underutilized. The 2011 Healthcare Effectiveness Data and Information Set report noted that 71.0% of women in Medicare health maintenance organizations and 75.0% of women in Medicare preferred provider organizations ever had a bone mineral density test for osteoporosis.29 While these numbers may not appear to be too far from the target, they are a poor gauge of DXA use as they include all types of bone mineral density tests in a woman’s lifetime, including even heel tests at health fairs.
Central DXA is used far less than one might expect. King and Fiorentino, in a recent analysis, showed that only about 14% of fee-for-service Medicare beneficiaries 65 years and older had one or more DXA tests in 2010.30 DXA retesting also does not seem to be an issue, with only 1 in 10 elderly women reporting having had a repeat test at 2-year intervals, and fewer than 1 in 100 tested reported testing more frequently.30
Some members of the public may have noticed the conclusions of a recent study1 that said that if an older postmenopausal woman has her bone mineral density measured to screen for osteoporosis and has a normal or only mildly low result, she does not need to come back for another measurement for approximately 15 years.
We believe this interpretation of the study’s findings is overly simplistic and may have the unfortunate result of causing some people to neglect their bone health. Moreover, the study looked mainly at baseline T scores as the determinant of the subsequent screening interval. However, clinicians must carefully consider a variety of clinical risk factors when deciding how often to obtain bone mineral density measurements. The ultimate goal is to not miss the window of opportunity for early detection and treatment when it would matter the most (ie, before fractures develop).
Here, we will review this recent study, its findings, and its implications.
OSTEOPOROSIS POSES AN ENORMOUS PUBLIC HEALTH PROBLEM
If we consider only the hip, an estimated 10 million people in the United States have osteoporosis (T score ≤ −2.5 or a preexisting fragility fracture), and 33.6 million have osteopenia (T score −1.01 to −2.49).2 The number of people with osteopenia can be assumed to be much higher if other skeletal sites are considered.
By increasing the risk of fragility fractures, osteoporosis poses an enormous public health problem. The surgeon general’s report points out that one of every two white women over age 50 will experience an osteoporosis-related fracture in her lifetime.3 Of all osteoporosis-related fractures, those of the hip carry the worst clinical outcomes. Approximately one in five elderly people who experience an osteoporosis-related hip fracture need long-term nursing home care, and as many as 20% die within 1 year.3
In recognition of the burden of osteoporosis, the US Preventive Services Task Force (USPSTF)4 and other scientific bodies2,3 recommend an initial bone mineral density test for all women age 65 and older. Dual-energy x-ray absorptiometry (DXA) is considered the gold standard for bone mineral density testing. Although the patient population that should receive an initial bone mineral density test has been clearly identified (see below), guidelines on the optimal frequency of testing do not exist, as data have been lacking. Recognizing this knowledge gap, Gourlay et al1 attempted to answer the question of how often elderly postmenopausal women should be retested.
WHEN DO 10% OF ELDERLY POSTMENOPAUSAL WOMEN REACH A T SCORE OF −2.5?
Gourlay et al1 analyzed data from 4,957 women in the Study of Osteoporotic Fractures. These women were predominantly white, were at least 67 years old and ambulatory, and had normal bone mineral density or osteopenia and no history of hip or clinical vertebral fracture at baseline. They had been recruited between 1986 and 1988 at sites in Baltimore, MD, Minneapolis, MN, the Monongahela Valley near Pittsburgh, PA, and Portland, OR.
DXA of the hip had been performed at baseline and at multiple times thereafter. The average follow-up time was 8 years.
The primary outcome was the time it took for 10% of the patients to reach a T score of −2.5 or less at the femoral neck or total hip (ie, to progress from normal bone mineral density or from osteopenia to osteoporosis) before they developed a fracture or started treatment for osteoporosis.
Clinical risk factors such as age, body mass index, estrogen use at baseline, fracture after age 50, current smoking, current or past use of glucocorticoids, and self-reported rheumatoid arthritis were included as covariates in time-to-event analyses.
ANSWER: 16.8 YEARS (IF NORMAL AT BASELINE)
The authors estimated that 10% of women would make the transition to osteoporosis before having a hip or clinical vertebral fracture in the following intervals:
- 16.8 years in women whose bone mineral density was normal at baseline (T score at femoral neck and total hip of −1.00 or higher)
- 17.3 years in women who had mild osteopenia at baseline (T score −1.01 to −1.49)
- 4.7 years in women with moderate osteopenia at baseline (T score −1.50 to −1.99)
- 1.1 years in women with advanced osteopenia at baseline (T score −2.00 to −2.49).
The authors also found that body mass index and current estrogen use were the only clinical risk factors that influenced these intervals; other clinical factors such as a fracture after age 50, current smoking, previous or current use of oral glucocorticoids, and self-reported rheumatoid arthritis did not.
They concluded that osteoporosis would develop in fewer than 10% of women if the rescreening interval was lengthened to 15 years for women with normal density or mild osteopenia, 5 years for women with moderate osteopenia, and 1 year for women with advanced osteopenia.
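For readers who want to see this mapping at a glance, the sketch below encodes the rescreening intervals proposed by Gourlay et al as a simple lookup by baseline T score. It is only an illustration of the published intervals (the function name and structure are ours), not a clinical decision tool, and it deliberately ignores the clinical risk factors discussed later in this article.

```python
def suggested_rescreening_interval_years(t_score: float) -> float:
    """Rescreening intervals proposed by Gourlay et al, keyed on the baseline
    femoral neck / total hip T score. Illustrative only; not a decision tool."""
    if t_score >= -1.0:    # normal bone mineral density
        return 15.0
    if t_score >= -1.5:    # mild osteopenia
        return 15.0
    if t_score >= -2.0:    # moderate osteopenia
        return 5.0
    if t_score > -2.5:     # advanced osteopenia
        return 1.0
    raise ValueError("A T score of -2.5 or lower meets the densitometric "
                     "definition of osteoporosis; screening intervals do not apply")

# Example: a baseline T score of -1.45 falls in the mild osteopenia range
print(suggested_rescreening_interval_years(-1.45))  # 15.0
```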
WHAT DOES THIS MEAN FOR THE PRACTICING CLINICIAN?
Who needs an initial DXA test according to current guidelines?
The USPSTF,4 the National Osteoporosis Foundation (NOF),5 the International Society for Clinical Densitometry (ISCD),6 and the American Association of Clinical Endocrinologists (AACE)7 propose that the following groups should undergo DXA:
- All women age 65 and older
- All postmenopausal women who have had a fragility fracture or who have one or more risk factors for osteoporosis (height loss, body mass index < 20 kg/m2, family history of osteoporosis, active smoking, excessive alcohol consumption)
- Adults who have a condition (eg, rheumatoid arthritis) or are taking a medication (eg, glucocorticoids in a daily dose ≥ 5 mg of prednisone or its equivalent for ≥ 3 months) associated with low bone mass or bone loss
- Anyone being considered for drug therapy for osteoporosis, discontinuing therapy for osteoporosis (including estrogen), or being treated for osteoporosis, to monitor the effect of treatment.
Assessing fracture risk. Although clinicians have traditionally relied on bone mineral density obtained by DXA to estimate fracture risk, the World Health Organization has developed a computer-based algorithm that calculates an individual’s 10-year fracture probability from easily obtained clinical risk factors with or without adding femoral-neck bone mineral density. The Fracture Risk Assessment tool, or FRAX, has attracted intense interest since its introduction in 2007 and has been endorsed by the USPSTF4 and by other scientific societies, including the NOF5 and the ISCD.8 In fact, the most recent USPSTF guidelines,4 which recommend screening all women age 65 and older, call for using FRAX to identify younger women at higher risk of fracture.
According to FRAX, a 65-year-old white woman who has no risk factors has a 9.3% chance of developing a major osteoporotic fracture in the next 10 years. If a younger woman (between the ages of 50 and 64) has a fracture risk as high as or higher than that of a 65-year-old white woman with no risk factors, then she too should be screened by DXA.
The FRAX calculator is available online at www.shef.ac.uk/FRAX.
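As an illustration of the screening rule just described, the sketch below encodes it as a small function. The age cutoff and the 9.3% threshold come from the text above; the FRAX probability itself must be obtained from the online calculator, and the function name and signature are purely hypothetical.

```python
from typing import Optional

def meets_initial_dxa_screening_rule(age_years: int,
                                     frax_major_fracture_risk_pct: Optional[float] = None) -> bool:
    """Illustrative encoding of the USPSTF rule: screen all women age 65 and
    older; screen younger postmenopausal women whose 10-year major osteoporotic
    fracture probability (from www.shef.ac.uk/FRAX) is at least 9.3%, the risk
    of a 65-year-old white woman with no additional risk factors."""
    if age_years >= 65:
        return True
    if frax_major_fracture_risk_pct is None:
        return False  # no FRAX estimate available for this younger woman
    return frax_major_fracture_risk_pct >= 9.3
```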
What are the current recommendations about follow-up DXA testing?
In eligible patients, the Centers for Medicare and Medicaid Services will pay for a DXA scan every 2 years. This interval is based on the concept that in an otherwise healthy person, it takes a minimum of 2 years to see a significant change in bone mineral density that can be attributed to a biological change in the bone and not just chance. The USPSTF4 and scientific societies such as the NOF5 generally agree with the Medicare guidelines of retesting every 2 years but recognize certain clinical situations that may warrant more frequent retesting (see below).
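One common way this "real change versus chance" threshold is formalized in densitometry is the least significant change (LSC). As a rough illustration, assuming a precision error of about 1% (an assumed, typical value rather than one taken from this article):

\[
\mathrm{LSC}_{95\%} \approx 2.77 \times \text{precision error} \approx 2.77 \times 1.0\% \approx 2.8\%,
\]

so a measured difference smaller than roughly 3% cannot confidently be distinguished from measurement noise, and at usual rates of bone loss or treatment response it generally takes about 2 years or more for a true change of that size to accumulate.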
But the real question is how long the DXA screening interval can be extended while still providing meaningful information to guide management decisions, and before a complication such as a fracture occurs. While there is convincing evidence to support the recommendations for an initial DXA test, data to answer the question of how long the retesting interval should be are lacking.
Before the study by Gourlay et al,1 the only data on repeat DXA came from work by Hillier et al.9 But those investigators asked a different question. They were interested in how well repeated measurements predicted fractures. They used the same population that Gourlay et al did but evaluated fractures, not T scores. They concluded that in healthy, adult postmenopausal women, repeating the bone mineral density measurement up to 8 years later adds little value to the initial measurement for predicting incident fractures.
Clinical factors also count
The T score should not be the only major factor determining the interval for bone mineral density testing in elderly women; clinical risk factors also should be kept in mind.
Gourlay et al concluded that age and T scores are the key predictive factors in determining the bone mineral density testing interval in elderly, postmenopausal women for screening purposes.1 In their statistical model, clinical risk factors such as fracture after age 50, current smoking, previous or current use of glucocorticoids, and self-reported rheumatoid arthritis did not influence the testing interval. They say that clinicians should not feel compelled to shorten the testing interval when these risk factors are present.
Readers may take this to mean that if these results were strictly applied to a 70-year-old white woman who is receiving oral glucocorticoids for rheumatoid arthritis and has a baseline T score of −1.45, then her next test could be postponed by 15 years (given that neither of these risk factors influenced the testing interval). Readers may also conclude that if this patient’s T score were −1.51, then her screening interval would be 5 years rather than 15 years.
However, Gourlay et al say1 that clinicians can choose to shorten the testing interval if there is evidence of decreased activity or mobility, weight loss, or other risk factors not considered in their analysis.
Soon after this study1 was published, Lewiecki et al10 and others11–13 published critical commentaries addressing controversial issues surrounding the study. They highlighted the importance of considering clinical risk factors for fracture in addition to the femoral neck and total hip T scores. In response to these comments, Gourlay et al clarified that their results were not generalizable to patients with secondary osteoporosis, such as those taking glucocorticoids or those who have rheumatic diseases.14
Readers should keep in mind that clinical risk factors make independent contributions to fracture risk (Figure 1).15
Readers should also recognize the following groups in whom the results of the study by Gourlay et al are not applicable since they were not included in their study:
- Men
- Women other than white women
- Women already diagnosed with osteoporosis and on bisphosphonates or any other osteoporosis treatment (except for estrogen).
The findings also do not apply to:
- Patients who experience a significant decline in health status or who develop new clinical conditions (such as hyperparathyroidism, paraproteinemias, or type 2 diabetes) or who use medications such as glucocorticoids that cause rapid bone loss. Changes in clinical situations such as these may necessitate more frequent bone mineral density testing in spite of a “good” baseline T score.
- Perimenopausal women or women who received their first bone mineral density test before age 65. Perimenopause and menopause may trigger rapid bone loss, which may be as much as one T-score point (ie, 1 standard deviation) at the spine and femoral neck.16 Therefore, testing done during this time cannot be used as the basis of future monitoring.
The study did not address asymptomatic vertebral fractures and lumbar spine density
Gourlay et al1 did not take into account asymptomatic spinal fractures; they used only clinical vertebral fractures in their risk estimates of spinal fractures. Ascertainment of morphometric spinal fractures may be methodologically challenging, but if the study had included these fractures, the outcomes and conclusions could have been very different.
Vertebral fractures are present in as many as 14% to 33% of postmenopausal women17 and indicate osteoporosis (regardless of the bone mineral density). Moreover, most vertebral fractures are clinically silent and escape detection; only approximately one in three radiographically defined vertebral fractures is reported clinically.18,19 Given the prevalence of these fractures, we and others10 have noted that the results of the Gourlay study may be biased toward longer screening intervals because they did not account for morphometric vertebral fractures.
Gourlay et al used T scores only of the femoral neck and total hip and not those of the lumbar spine. Some studies have found that hip measurements may be superior to spine measurements for overall osteoporotic fracture prediction.20,21 However, lumbar spine bone mineral density is predictive of fracture at other skeletal sites,22,23 is a widely accepted skeletal site for measurement, and is used to diagnose osteoporosis. Moreover, spine and hip T scores can be discordant: the lumbar spine T score can be −2.5 or lower even when the total hip or femoral neck T score is higher than −2.5.
More fractures occur in people with osteopenia than with osteoporosis
Osteoporosis imparts a much higher risk of fracture than does osteopenia. However, given the much greater prevalence of osteopenia (33.6 million people) than of osteoporosis (10 million),2 it is not hard to appreciate that the absolute number of fractures is higher in the osteopenic group than in the group with osteoporosis as defined by T scores. Siris et al24 point out that at least half of osteoporotic fractures occur in patients with osteopenia, who make up a larger segment of the population than those with osteoporosis.
Some clinical trials have shown that bisphosphonates are not effective in preventing clinical fractures in women who do not have osteoporosis.25,26 However, clinicians must recognize that while bisphosphonates may not be as effective in preventing fractures in the osteopenic group with no other clinical risk factors, the presence of multiple clinical risk factors incrementally increases the fracture risk (which can be assessed via FRAX) and may require starting drug therapy earlier.
Women with vertebral fractures are considered to have clinical osteoporosis even if they have T scores in the osteopenic range, and must be considered for drug therapy.
The public health burden of fractures will not decrease unless individuals with low bone mineral density who are at an increased risk of fracture are identified and treated.24
Is DXA testing overused or underused? Does it decrease the rate of fractures?
The study by Gourlay et al1 captured a lot of media attention, with many newspapers and blogs claiming that women may be getting tested too often.27,28 In reality, however, this test is highly underutilized. The 2011 Healthcare Effectiveness Data and Information Set report noted that 71.0% of women in Medicare health maintenance organizations and 75.0% of women in Medicare preferred provider organizations had ever had a bone mineral density test for osteoporosis.29 While these numbers may not appear to be too far from the target, they are a poor gauge of DXA use because they include all types of bone mineral density tests in a woman’s lifetime, including even heel tests at health fairs.
Central DXA is used far less than one might expect. King and Fiorentino, in a recent analysis, showed that only about 14% of fee-for-service Medicare beneficiaries 65 years and older had one or more DXA tests in 2010.30 DXA retesting also does not seem to be an issue: only 1 in 10 elderly women reported having had a repeat test at a 2-year interval, and fewer than 1 in 100 of those tested reported more frequent testing.30
- Gourlay ML, Fine JP, Preisser JS, et al; Study of Osteoporotic Fractures research group. Bone-density testing interval and transition to osteoporosis in older women. N Engl J Med 2012; 366:225–233.
- National Osteoporosis Foundation (NOF). America’s Bone Health: The State of Osteoporosis and Low Bone Mass in Our Nation. Washington, DC: National Osteoporosis Foundation; 2002.
- US Department of Health and Human Services. Bone Health and Osteoporosis: A Report of the Surgeon General. Rockville, MD: US Department of Health and Human Services, Office of the Surgeon General; 2004.
- US Preventive Services Task Force. Screening for osteoporosis: US Preventive Services Task Force recommendation statement. Ann Intern Med 2011; 154:356–364.
- National Osteoporosis Foundation. Clinician’s Guide to Prevention and Treatment of Osteoporosis. Washington, DC: National Osteoporosis Foundation; 2010.
- Baim S, Binkley N, Bilezikian JP, et al. Official Positions of the International Society for Clinical Densitometry and executive summary of the 2007 ISCD Position Development Conference. J Clin Densitom 2008; 11:75–91.
- Watts NB, Bilezikian JP, Camacho PM, et al; AACE Osteoporosis Task Force. American Association of Clinical Endocrinologists Medical Guidelines for Clinical Practice for the diagnosis and treatment of postmenopausal osteoporosis. Endocr Pract 2010; 16(suppl 3):1–37.
- The International Society for Clinical Densitometry (ISCD); the International Osteoporosis Foundation (IOF). 2010 Official Positions on FRAX. www.iscd.org/official-positions. Accessed February 1, 2013.
- Hillier TA, Stone KL, Bauer DC, et al. Evaluating the value of repeat bone mineral density measurement and prediction of fractures in older women: the study of osteoporotic fractures. Arch Intern Med 2007; 167:155–160.
- Lewiecki EM, Laster AJ, Miller PD, Bilezikian JP. More bone density testing is needed, not less. J Bone Miner Res 2012; 27:739–742.
- Leslie WD, Morin SN, Lix LM. Bone-density testing interval and transition to osteoporosis. N Engl J Med 2012; 366:1547.
- Endocrine Society. The Endocrine Society Recommends Individualization of Bone Mineral Density Testing Frequency in Women Over the Age of 67: February 7, 2012. http://www.endo-society.org/advocacy/legislative/letters/upload/Endocrine-Society-Response-to-BMD-Testing-Final.pdf. Accessed January 29, 2013.
- The International Society for Clinical Densitometry (ISCD). ISCD response to NEJM article: January 20, 2012. http://www.american-bonehealth.org/images/stories/BMD_Testing_Interval_ISCD_Response_to_NEJM_Article.pdf. Accessed January 29, 2013.
- Gourlay ML, Preisser JS, Lui LY, Cauley JA, Ensrud KE; Study of Osteoporotic Fractures Research Group. BMD screening in older women: initial measurement and testing interval. J Bone Miner Res 2012; 27:743–746.
- Kanis JA, Oden A, Johansson H, Borgström F, Ström O, McCloskey E. FRAX and its applications to clinical practice. Bone 2009; 44:734–743.
- Recker RR. Early postmenopausal bone loss and what to do about it. Ann NY Acad Sci 2011; 1240:E26–E30.
- Genant HK, Jergas M, Palermo L, et al. Comparison of semiquantitative visual and quantitative morphometric assessment of prevalent and incident vertebral fractures in osteoporosis. The Study of Osteoporotic Fractures Research Group. J Bone Miner Res 1996; 11:984–996.
- Black DM, Cummings SR, Karpf DB, et al. Randomised trial of effect of alendronate on risk of fracture in women with existing vertebral fractures. Fracture Intervention Trial Research Group. Lancet 1996; 348:1535–1541.
- Nevitt MC, Ettinger B, Black DM, et al. The association of radiographically detected vertebral fractures with back pain and function: a prospective study. Ann Intern Med 1998; 128:793–800.
- Leslie WD, Tsang JF, Caetano PA, Lix LM; Manitoba Bone Density Program. Effectiveness of bone density measurement for predicting osteoporotic fractures in clinical practice. J Clin Endocrinol Metab 2007; 92:77–81.
- Leslie WD, Lix LM, Tsang JF, Caetano PA; Manitoba Bone Density Program. Single-site vs multisite bone density measurement for fracture prediction. Arch Intern Med 2007; 167:1641–1647.
- Stone KL, Seeley DG, Lui LY, et al; Osteoporotic Fractures Research Group. BMD at multiple sites and risk of fracture of multiple types: long-term results from the Study of Osteoporotic Fractures. J Bone Miner Res 2003; 18:1947–1954.
- Black DM, Cummings SR, Genant HK, Nevitt MC, Palermo L, Browner W. Axial and appendicular bone density predict fractures in older women. J Bone Miner Res 1992; 7:633–638.
- Siris ES, Baim S, Nattiv A. Primary care use of FRAX: absolute fracture risk assessment in postmenopausal women and older men. Postgrad Med 2010; 122:82–90.
- Cummings SR, Black DM, Thompson DE, et al. Effect of alendronate on risk of fracture in women with low bone density but without vertebral fractures: results from the Fracture Intervention Trial. JAMA 1998; 280:2077–2082.
- McClung MR, Geusens P, Miller PD, et al; Hip Intervention Program Study Group. Effect of risedronate on the risk of hip fracture in elderly women. Hip Intervention Program Study Group. N Engl J Med 2001; 344:333–340.
- Park A. How often do women really need bone density tests? Time: Health & Family. January 19, 2012. http://healthland.time.com/2012/01/19/most-women-may-be-getting-too-many-bone-density-tests/. Accessed January 29, 2013.
- Kolata G. Osteoporosis patients advised to delay bone density retests. The New York Times: Health. January 19, 2012. http://query.nytimes.com/gst/fullpage.html?res=9B01E1D61230F93AA25752C0A9649D8B63. Accessed January 29, 2013.
- National Committee for Quality Assurance. The State of Health Care Quality Report. http://www.ncqa.org/Portals/0/State%20of%20Health%20Care/2012/SOHC%20Report%20Web.pdf. Accessed February 1, 2013.
- King AB, Fiorentino DM. Medicare payment cuts for osteoporosis testing reduced use despite tests’ benefit in reducing fractures. Health Aff (Millwood) 2011; 30:2362–2370.
KEY POINTS
- The criteria for who should undergo bone mineral density measurement are well established, but data on repeat testing are scarce.
- Gourlay et al concluded that age and T scores are the key predictive factors in determining the bone mineral density testing interval, while clinical risk factors such as fracture after age 50, current smoking, previous or current use of glucocorticoids, and self-reported rheumatoid arthritis are not.
- The Fracture Risk Assessment tool (FRAX) is a useful clinical tool that calculates an individual’s 10-year risk of fracture. It is available at www.shef.ac.uk/FRAX
What should be the interval between bone density screenings?
In 2010, the United States Preventive Services Task Force recommended screening for osteoporosis by measuring bone mineral density in women age 65 and older and also in younger women if their fracture risk is equal to or greater than that of a 65-year-old white woman who has no additional risk factors.
But what should be the interval between screenings? The Task Force stated that evidence on the optimum screening interval is lacking, that 2 years may be the minimum interval due to precision error, but that longer intervals may be necessary to improve fracture risk prediction.1 They also cited a study showing that repeating the test up to 8 years after an initial test did not improve the ability of screening to predict fractures.2 This was recently confirmed in a study from Canada.3
GOURLAY ET AL: TEST AGAIN IN 1 TO 15 YEARS
In response to this information void, Gourlay and colleagues4 analyzed data from the Study of Osteoporotic Fractures. Because these investigators were interested in the interval between screening measurements of bone mineral density, they included only women who did not already have osteoporosis or take medication for osteoporosis. They wanted to know how long it took for 10% of women to develop osteoporosis, and found that this interval varied from 1 to 15 years depending on the initial bone density.
I did not think these results were surprising. The durations in which osteoporosis developed were similar to what one would predict from cross-sectional reference ranges. The average woman loses a little less than 1% of bone density per year after age 65. A T score of −1.0 is 22% higher than a T score of −2.5, so on average it would take more than 20 years to go from early osteopenia to osteoporosis.
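As a rough check of this estimate, using only the figures stated here (a decline of about 1% per year and a bone density about 22% higher at a T score of −1.0 than at −2.5), the time to cross the osteoporosis threshold would be approximately

\[
t \approx \frac{\ln(1.22)}{-\ln(0.99)} \approx \frac{0.199}{0.010} \approx 20 \text{ years},
\]

which is consistent with the statement that, on average, the transition from early osteopenia to osteoporosis takes on the order of 20 years or more.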
AN ONGOING DEBATE ON SCREENING
The report generated a debate about the value and timing of repeated screening.5,6
In their article “More bone density testing is needed, not less,”5 Lewiecki et al criticized the Gourlay analysis because it did not include spine measurements or screen for asymptomatic vertebral fractures, and because it did not include enough clinical risk factors.5,6 They claimed that media attention suggested that dual-energy x-ray absorptiometry (DXA) was overused and expensive, citing three news reports. One of the news reports did misinterpret the Gourlay study and suggested that fewer women should be screened.7 The others, however, accurately described the findings that many women did not need to undergo DXA every 2 years.8,9
In this issue of the Cleveland Clinic Journal of Medicine, Doshi and colleagues express their opinion that the interval between bone mineral density tests should be guided by an assessment of clinical risk factors and not just T scores.10
Doshi et al are also concerned about erroneous conclusions drawn by the media. However, when I reviewed the news reports that they cited, I thought the reports were well written and conveyed the results appropriately. One report, by Alice Park,11 cautioned: “doctors need to remain flexible in advising women about when to get tested. A patient who has a normal T score but then develops cancer and loses a lot of weight, for example, may be more vulnerable to developing osteoporosis and therefore may need to get screened before the 15-year interval.”11 The other, by Gina Kolata, also explained that those taking high doses of corticosteroids for another medical condition would lose bone rapidly, but the findings “cover most normal women.”9 Neither report discouraged patients from getting screening in the first place.
Both Lewiecki et al and Doshi et al say that clinical factors should be considered, but do not specify which factors should be included in addition to the ones already evaluated by Gourlay et al (age, body mass index, estrogen use at baseline, any fracture after 50 years of age, current smoking, current or past use of oral glucocorticoids, and self-reported rheumatoid arthritis). These did not change the estimated time to develop osteoporosis for 90% of the study participants.
Furthermore, Gourlay et al had already noted that “clinicians may choose to reevaluate patients before our estimated screening intervals if there is evidence of decreased activity or mobility, weight loss, or other risk factors not considered in our analyses.”4 Thus, patients with serious diseases should undergo DXA not for screening but for monitoring disease progression, and the Gourlay study results do not apply to them.
PATIENTS ON GLUCOCORTICOIDS: A SPECIAL SUBSET
Patients who are treated with glucocorticoids deserve further discussion. Consider the example described by Doshi et al of a woman with rheumatoid arthritis, taking prednisone, with a T score of −1.4. She would have to lose about 17% of her bone density to reach a T score at the osteoporosis level. One clinical trial in patients taking glucocorticoids, most of whom had rheumatoid arthritis, reported a loss of 2% after 2 years in the placebo group,12 so it is unlikely that this patient’s bone density would reach the osteoporosis range for at least several years.
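A simple linear extrapolation from these figures (about 17% of bone density to lose, and a loss of roughly 1% per year, ie, 2% over 2 years in the placebo group) illustrates why, although it ignores individual variation and any acceleration of loss:

\[
t \approx \frac{17\%}{1\%\ \text{per year}} \approx 17 \text{ years}.
\]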
However, clinicians know that these patients get fractures, especially in the spine, even with normal bone density. Therefore, vertebral fracture assessment would be more important than bone density screening in this patient. Currently, there is uncertainty about the best time to initiate treatment in patients taking glucocorticoids, as well as about the choice of initial medication. More research on the long-term benefits of treatment is especially needed in this population.
VERTEBRAL FRACTURES: NO FIRM RECOMMENDATIONS
Doshi et al state that the Gourlay study was biased towards longer screening intervals because it included women with asymptomatic vertebral fractures. This does not make sense, because women who have untreated asymptomatic fractures would not be expected to lose bone at a slower rate. This does not mean that the asymptomatic fractures are trivial.
Instead of getting more frequent bone density measurements, I think it would be more logical to evaluate vertebral fractures using radiographs or vertebral fracture assessment, but we can’t make a firm recommendation without studies of the effectiveness of screening for vertebral fractures.
WHAT ABOUT OSTEOPENIA?
Critics of the Gourlay study point out that most fractures occur in the osteopenic population. This is true, but it does not mean that bone density should be measured more frequently. Bisphosphonates are not effective at preventing a first fracture unless the T score is lower than −2.5.13 Patients who have risk factors in addition to osteopenia may have a higher risk of fracture, but it is not clear whether drug treatment would reduce that risk. For example, rodeo riders have a high fracture risk, but they would not benefit from taking alendronate. In some cases, such as people who smoke or drink alcohol to excess, treating the risk factor would be more appropriate.
As Doshi et al and others have noted, the study by Gourlay et al has limitations, and of course clinical judgment must be used in implementing the findings of any study. But doctors should not order unnecessary and expensive tests, and physicians who perform bone densitometry should not recommend frequent repeat testing that does not benefit the patient.
- US Preventive Services Task Force. Screening for osteoporosis: US Preventive Services Task Force recommendation statement. Ann Intern Med 2011; 154:356–364.
- Hillier TA, Stone KL, Bauer DC, et al. Evaluating the value of repeat bone mineral density measurement and prediction of fractures in older women: the study of osteoporotic fractures. Arch Intern Med 2007; 167:155–160.
- Leslie WD, Morin SN, Lix LM; Manitoba Bone Density Program. Rate of bone density change does not enhance fracture prediction in routine clinical practice. J Clin Endocrinol Metab 2012; 97:1211–1218.
- Gourlay ML, Fine JP, Preisser JS, et al; Study of Osteoporotic Fractures Research Group. Bone-density testing interval and transition to osteoporosis in older women. N Engl J Med 2012; 366:225–233.
- Lewiecki EM, Laster AJ, Miller PD, Bilezikian JP. More bone density testing is needed, not less. J Bone Miner Res 2012; 27:739–742.
- Yu EW, Finkelstein JS. Bone density screening intervals for osteoporosis: one size does not fit all. JAMA 2012; 307:2591–2592.
- Frier S. Women receive bone tests too often for osteoporosis, study finds. Bloomberg News; 2012. http://www.bloomberg.com/news/2012-01-18/many-women-screened-for-osteoporosis-don-t-need-it-researchers-report.html. Accessed January 3, 2013.
- Knox R. Many older women may not need frequent bone scans. National Public Radio; 2012. http://www.npr.org/blogs/health/2012/01/19/145419138/many-older-women-may-not-need-frequent-bone-scans?ps=sh_sthdl. Accessed January 3, 2013.
- Kolata G. Patients with normal bone density can delay retests, study suggests. The New York Times; 2012. http://www.nytimes.com/2012/01/19/health/bone-density-tests-for-osteoporosis-can-wait-study-says.html. Accessed January 3, 2013.
- Doshi KB, Khan LZ, Williams SE, Licata AA. Bone mineral density testing interval and transition to osteoporosis in older women: Is a T-score enough to determine a screening interval? Cleve Clin J Med 2013; 80:234–239.
- Park A. How often do women really need bone density tests? Time Healthland; 2012. http://healthland.time.com/2012/01/19/most-women-may-be-getting-too-many-bone-density-tests/. Accessed January 3, 2013.
- Adachi JD, Saag KG, Delmas PD, et al. Two-year effects of alendronate on bone mineral density and vertebral fracture in patients receiving glucocorticoids: a randomized, double-blind, placebo-controlled extension trial. Arthritis Rheum 2001; 44:202–211.
- Cummings SR, Black DM, Thompson DE, et al. Effect of alendronate on risk of fracture in women with low bone density but without vertebral fractures: results from the Fracture Intervention Trial. JAMA 1998; 280:2077–2082.
A rapidly growing crusted nodule on the lip
A 76-year-old man presented with a rapidly growing, indurated, crusted nodule on his lower lip (Figure 1). This combination—a rapidly growing nodule on a sun-exposed surface in an older patient—pointed to a diagnosis of squamous cell carcinoma, and biopsy study of the affected area confirmed this diagnosis (Figure 2). Clinical examination, computed tomography, and ultrasonography of the neck revealed no lymph node involvement; the tumor was staged as T2N0M0. Mohs microscopically controlled surgery was performed to remove the tumor with clear margins, and the patient has done well.
DIFFERENTIAL DIAGNOSIS
Malignant melanoma, Merkel cell carcinoma, and deep fungal infection can also cause a rapidly growing crusted nodule on the lower lip, but this typically is not an initial presentation for these conditions.
Malignant melanoma
Malignant melanoma should be considered for any rapidly growing cutaneous tumor, especially on sun-damaged skin. Most melanoma lesions have variations in pigment and may contain shades of blue, brown, red, pink, and white. Amelanotic melanomas may mimic squamous cell carcinomas, but the histologic features of atypical keratinocytes in this patient ruled out that diagnosis. Biopsy of amelanotic melanoma reveals nests of melanocytes, usually associated with an in situ component in the overlying epithelium.
Merkel cell carcinoma
Merkel cell carcinoma can mimic squamous cell carcinoma or can even arise in association with squamous cell carcinoma. Histologic examination allows for differentiation. In difficult cases, cytokeratin staining with cytokeratin 20 and CAM 5.2 can clarify the diagnosis, because Merkel cell carcinomas exhibit a characteristic paranuclear dot-like pattern not evident in squamous cell carcinoma.
Deep fungal infection
Colonization by Candida organisms was noted within the overlying crust in this patient’s lesion, but fungal organisms were not noted in the epidermis or dermis. Pseudoepitheliomatous hyperplasia can be prominent in deep fungal infection, but the marked cytologic atypia in this case excluded deep fungal infection.
Mucosal neuroma
Mucosal neuromas are typically smooth-surfaced, soft lesions on the lip. They may be associated with multiple endocrine neoplasia syndromes. Biopsy study reveals that they are composed of delicate spindle cells that lack atypia.
SQUAMOUS CELL CARCINOMA
Squamous cell carcinoma is one of the most common types of nonmelanoma skin cancer and is associated with increased sun exposure and light skin pigmentation.1 Unlike basal cell carcinoma, squamous cell carcinoma has a considerable potential to metastasize.2–4 The current cancer staging manual of the American Joint Committee on Cancer reflects recent evidence-based information on staging and prognosis.5–7 Squamous cell carcinoma of the lip carries an increased risk of metastasis,8 and an increase in the number or the size of involved lymph nodes carries a worse prognosis.9 The lower lip is most commonly involved because of increased sun exposure. The most common route for the spread of squamous cell carcinoma of the head and neck is via lymphatics, so careful evaluation of the head and neck is indicated.10
STAGING OUR PATIENT’S LESION
Current staging of cutaneous squamous cell carcinoma takes into account a variety of different features. Our patient’s tumor was not associated with invasion of the maxilla, mandible, orbit, or temporal bone. This alone would have qualified the tumor as a T1 lesion, but its size of 3 cm indicated a higher risk and qualified it as a T2 lesion.
The histologic features of our patient’s lesion also indicated higher risk according to the current staging system.5 The Breslow thickness on histologic examination (measured from the granular cell layer of the epidermis to the deepest portion of the tumor) was 4 mm, and the Clark level was IV (extension to the reticular dermis). These high-risk features and the location on the lip warrant classification as T2 and carry a worse prognosis.
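To make the staging logic above concrete, here is a minimal sketch of how the T category might be derived under the AJCC 7th-edition rules summarized in the text and in reference 5. The function and argument names are illustrative, not from the staging manual, and the list of high-risk features is abbreviated.

```python
# Minimal sketch, assuming the AJCC 7th-edition rules summarized above;
# function and argument names are illustrative, not from the staging manual.

def t_category(size_cm, high_risk_features, facial_bone_invasion=False,
               skeletal_or_skull_base_invasion=False):
    """Return an approximate AJCC 7th-edition T category for cutaneous SCC.

    high_risk_features counts features such as depth > 2 mm, Clark level IV
    or higher, perineural invasion, poor differentiation, or a primary site
    on the ear or the non-hair-bearing lip.
    """
    if skeletal_or_skull_base_invasion:
        return "T4"
    if facial_bone_invasion:  # maxilla, mandible, orbit, or temporal bone
        return "T3"
    if size_cm > 2 or high_risk_features >= 2:
        return "T2"
    return "T1"

# Our patient: 3-cm lower-lip lesion, 4-mm depth, Clark level IV
print(t_category(size_cm=3.0, high_risk_features=3))  # -> "T2"
```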
Wedge excision of the lower lip and Mohs surgery are accepted treatment options. The patient chose Mohs surgery and has done well, with an excellent cosmetic and functional outcome. Radiation therapy can be useful as an adjuvant when lymph nodes are involved,11 but this was not necessary in our patient. Careful long-term follow-up is warranted, as these patients are at higher risk of developing other, separate tumors.
- Schwartz RA. Squamous cell carcinoma. In: Schwartz RA, editor. Skin Cancer: Recognition and Management, 2nd ed. Malden, MA: Blackwell Publishing; 2008:47–65.
- D’Souza J, Clark J. Management of the neck in metastatic cutaneous squamous cell carcinoma of the head and neck. Curr Opin Otolaryngol Head Neck Surg 2011; 19:99–105.
- Zbar RI, Canady JW. MOC-PSSM CME article: Nonmelanoma facial skin malignancy. Plast Reconstr Surg 2008; 121(suppl 1):1–9.
- Morselli P, Masciotra L, Pinto V, Zollino I, Brunelli G, Carinci F. Clinical parameters in T1N0M0 lower lip squamous cell carcinoma. J Craniofac Surg 2007; 18:1079–1082.
- American Joint Committee on Cancer. Cancer Staging Manual. 7th ed. http://www.cancerstaging.org. Accessed February 3, 2013.
- Lardaro T, Shea SM, Sharfman W, Liégeois N, Sober AJ. Improvements in the staging of cutaneous squamous-cell carcinoma in the 7th edition of the AJCC Cancer Staging Manual. Ann Surg Oncol 2010; 17:1979–1980.
- Cutaneous squamous cell carcinoma and other cutaneous carcinomas. In: Edge SB, Byrd DR, Compton CC, Fritz AG, Greene FL, Trotti A, editors. AJCC Cancer Staging Manual. 7th ed. New York, NY: Springer; 2010:301–314.
- Frierson HF, Cooper PH. Prognostic factors in squamous cell carcinoma of the lower lip. Hum Pathol 1986; 17:346–354.
- Civantos FJ, Moffat FL, Goodwin WJ. Lymphatic mapping and sentinel lymphadenectomy for 106 head and neck lesions: contrasts between oral cavity and cutaneous malignancy. Laryngoscope 2006; 112(3 Pt 2 suppl 109):1–15.
- Rowe DE, Carroll RJ, Day CL. Prognostic factors for local recurrence, metastasis, and survival rates in squamous cell carcinoma of the skin, ear, and lip. Implications for treatment modality selection. J Am Acad Dermatol 1992; 26:976–990.
- Veness MJ, Palme CE, Smith M, Cakir B, Morgan GJ, Kalnins I. Cutaneous head and neck squamous cell carcinoma metastatic to cervical lymph nodes (nonparotid): a better outcome with surgery and adjuvant radiotherapy. Laryngoscope 2003; 113:1827–1833.
Implications of a prominent R wave in V1
A 19-year-old woman with no significant cardiac or pulmonary history presented with exertional dyspnea, which had begun a few months earlier. Auscultation revealed a loud pulmonary component of the second heart sound and a diastolic murmur heard along the upper left sternal border. Her 12-lead electrocardiogram is shown in Figure 1.
Q: Which of the following can cause prominent R waves in lead V1?
- Normal variant in young adults
- Wolff-Parkinson-White syndrome
- Posterior wall myocardial infarction
- Right ventricular hypertrophy
- All of the above
A: The correct answer is all of the above.
The patient’s electrocardiogram shows a right atrial abnormality and right ventricular hypertrophy. Right atrial enlargement is evidenced by a prominent initial P wave in V1 with an amplitude of at least 1.5 mm (0.15 mV). A P wave taller than 2.5 mm (0.25 mV) in lead II may also suggest a right atrial abnormality.1
Multiple criteria exist for the diagnosis of right ventricular hypertrophy. A tall R wave in V1 with an R/S ratio greater than 1 (ie, an R wave amplitude greater than the S wave depth) is a commonly used criterion.2 An R/S ratio less than 1 in V6 (ie, a deep S wave) is another. An R wave amplitude greater than 7 mm in V1 by itself may also indicate right ventricular hypertrophy. Most of the electrocardiographic criteria are specific but not sensitive for this diagnosis.3
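A brief sketch of how the voltage criteria just described could be checked is shown below; amplitudes are assumed to be measured in millimeters at standard calibration (10 mm = 1 mV), and the function and parameter names are hypothetical, not from the article.

```python
# Minimal sketch applying the right ventricular hypertrophy voltage criteria
# described above; amplitudes in millimeters at standard calibration
# (10 mm = 1 mV). Function and parameter names are hypothetical.

def rvh_criteria_met(r_v1, s_v1, r_v6, s_v6):
    """Return the criteria from the text that a given ECG satisfies."""
    met = []
    if r_v1 > s_v1:          # R/S ratio > 1 in V1
        met.append("R/S > 1 in V1")
    if r_v6 < s_v6:          # R/S ratio < 1 in V6
        met.append("R/S < 1 in V6")
    if r_v1 > 7:             # tall R wave in V1 by itself
        met.append("R > 7 mm in V1")
    return met

# Example: tall R in V1 (9 mm, S 3 mm) and deep S in V6 (R 4 mm, S 8 mm)
print(rvh_criteria_met(r_v1=9, s_v1=3, r_v6=4, s_v6=8))
```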
Other causes of tall R waves in V1 are given in Table 1.
Q: Which of the following diseases can present with an electrocardiographic pattern of right ventricular hypertrophy in young patients?
- Pulmonary hypertension
- Atrial septal defect
- Tetralogy of Fallot
- Pulmonary stenosis
- All of the above
A: The correct answer is all of the above.4
Our patient underwent multiple investigations. On echocardiography, her estimated right ventricular pressure was 80 mm Hg, and on cardiac catheterization her mean pulmonary arterial pressure was 55 mm Hg and her pulmonary capillary wedge pressure was 6 mm Hg. She was diagnosed with pulmonary arterial hypertension, which was the cause of her right ventricular hypertrophy. She eventually underwent bilateral lung transplantation.
- Hancock EW, Deal BJ, Mirvis DM, et al; American Heart Association Electrocardiography and Arrhythmias Committee, Council on Clinical Cardiology; American College of Cardiology Foundation; Heart Rhythm Society. AHA/ACCF/HRS recommendations for the standardization and interpretation of the electrocardiogram: part V: electrocardiogram changes associated with cardiac chamber hypertrophy: a scientific statement from the American Heart Association Electrocardiography and Arrhythmias Committee, Council on Clinical Cardiology; the American College of Cardiology Foundation; and the Heart Rhythm Society: endorsed by the International Society for Computerized Electrocardiology. Circulation 2009; 119:e251–e261.
- Milnor WR. Electrocardiogram and vectorcardiogram in right ventricular hypertrophy and right bundle-branch block. Circulation 1957; 16:348–367.
- Lehtonen J, Sutinen S, Ikäheimo M, Pääkkö P. Electrocardiographic criteria for the diagnosis of right ventricular hypertrophy verified at autopsy. Chest 1988; 93:839–842.
- Webb G, Gatzoulis MA. Atrial septal defects in the adult: recent progress and overview. Circulation 2006; 114:1645–1653.
Resistance of man and bug
Why individual clinicians make specific decisions usually can be sorted out. But our behavior as a group is more difficult to understand and, even when there are pressing and convincing reasons to change, behavior is difficult to alter.
In this issue, Drs. Federico Perez and David Van Duin discuss the emergence of carbapenem-resistant bacteria, dubbed “superbugs” by the media. Antibiotic resistance is not new; it was reported in Staphylococcus species within several years of the introduction of penicillin.1 However, it has been increasing in prevalence and molecular complexity after years of relatively promiscuous antibiotic use. As the percentage of inpatients with immunosuppression and frailty increases in this environment of known antibiotic resistance, initial empiric antibiotic choices will by necessity include drugs likely to further promote development of resistant strains. But why do physicians still prescribe antibiotics for uncomplicated upper respiratory tract infections and asymptomatic bacteriuria, despite numerous studies and guidelines suggesting this practice has little benefit? Is it because patients expect a prescription in return for their copayment? Is it the path of least resistance? Or do physicians not accept the data showing that it is unnecessary?
Dr. Gerald Appel discusses diabetic nephropathy, an area that involves resistance of another kind, ie, the apparent resistance of physicians and patients to achieving evidence-based treatment targets. We hold controlled trials as the Holy Grail of evidence-based medicine, yet we seem to have an aversion to following guidelines based on trial-derived evidence. (I do not refer here to blind guideline adherence, ignoring individual patient characteristics.)
So how can physicians’ behavior be altered and our resistance to change be reduced? Experiments are under way, such as paying physicians based on their performance, linking patients’ insurance rates to achieving selected outcomes, and linking physician practice self-review to certification. Perhaps naively, I continue to believe that the most effective impetus to changing personal practice is the dissemination of data from high-quality trials (tempered by our accumulated experience and keeping our eyes wide open), coupled with our desire to do the best for our patients.
- Barber M. Coagulase-positive staphylococci resistant to penicillin. J Pathol Bacteriol 1947; 59:373–384.
Detecting and controlling diabetic nephropathy: What do we know?
Diabetes is on the rise, and so is diabetic nephropathy. In view of this epidemic, physicians should consider strategies to detect and control kidney disease in their diabetic patients.
This article will focus on kidney disease in adult-onset type 2 diabetes. Although its pathogenetic mechanisms differ from those of type 1 diabetes, the clinical course of the two conditions is very similar in terms of the prevalence of proteinuria after diagnosis, the progression to renal failure after the onset of proteinuria, and treatment options.1
DIABETES AND DIABETIC KIDNEY DISEASE ARE ON THE RISE
The incidence of diabetes increases with age, and with the aging of the baby boomers, its prevalence is growing dramatically. The 2005–2008 National Health and Nutrition Examination Survey estimated the prevalence as 3.7% in adults age 20 to 44, 13.7% in those age 45 to 64, and 26.9% in those age 65 and older. The obesity epidemic is also contributing to the increase in diabetes in all age groups.
Diabetic kidney disease has increased in the United States from about 4 million cases 20 years ago to about 7 million in 2005–2008.2 Diabetes is the major cause of end-stage renal disease in the developed world, accounting for 40% to 50% of cases. Other major causes are hypertension (27%) and glomerulonephritis (13%).3
Physicians in nearly every field of medicine now care for patients with diabetic nephropathy. The classic presentation—a patient who has impaired vision, fluid retention with edema, and hypertension—is commonly seen in dialysis units and ophthalmology and cardiovascular clinics.
CLINICAL PROGRESSION
Early in the course of diabetic nephropathy, blood pressure is normal and microalbuminuria is not evident, but many patients have a high glomerular filtration rate (GFR), indicating temporarily “enhanced” renal function or hyperfiltration. The next stage is characterized by microalbuminuria, correlating with glomerular mesangial expansion: the GFR falls back into the normal range and blood pressure starts to increase. Finally, macroalbuminuria occurs, accompanied by rising blood pressure and a declining GFR, correlating with the histologic appearance of glomerulosclerosis and Kimmelstiel-Wilson nodules.4
Hypertension develops in 5% of patients by 10 years after type 1 diabetes is diagnosed, in 33% by 20 years, and in 70% by 40 years. In contrast, 40% of patients with type 2 diabetes have high blood pressure at diagnosis.
Unfortunately, in most cases, this progression is a one-way street, so it is critical to intervene to try to slow the progression early in the course of the disease process.
SCREENING FOR DIABETIC NEPHROPATHY
Nephropathy screening guidelines for patients with diabetes are provided in Table 1.5
Blood pressure should be monitored at each office visit (Table 1). The goal for adults with diabetes should be to reduce blood pressure to less than 130/80 mm Hg. Reduction beyond this level may be associated with an increased mortality rate.6 Very high blood pressure (> 180 mm Hg systolic) should be lowered slowly. Lowering blood pressure delays the progression from microalbuminuria (30–299 mg/day or 20–199 μg/min) to macroalbuminuria (> 300 mg/day or > 200 μg/min) and slows the progression to renal failure.
Urinary albumin. Proteinuria takes 5 to 10 years to develop after the onset of diabetes. Because patients with type 2 diabetes may have had the disease for some time before being diagnosed, urinary albumin screening should be performed at diagnosis and annually thereafter. Patients with type 1 diabetes are usually diagnosed at or near the onset of disease; therefore, annual screening for urinary albumin can begin 5 years after diagnosis.5
Proteinuria can be measured in different ways (Table 2). The basic screening test for clinical proteinuria is the urine dipstick, which is very sensitive to albumin and relatively insensitive to other proteins. “Trace-positive” results are common in healthy people, so proteinuria is not confirmed unless a patient has repeatedly positive results.
Microalbuminuria is important to measure, especially if it helps determine therapy. It is not detectable by the urinary dipstick, but can be measured in the following ways:
- Measurement of the albumin-creatinine ratio in a random spot collection
- 24-hour collection (creatinine should simultaneously be measured and creatinine clearance calculated)
- Timed collection (4 hours or overnight).
The first method is preferred, and any positive test result must be confirmed by repeat analyses of urinary albumin before a patient is diagnosed with microalbuminuria.
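As a rough illustration of the thresholds quoted earlier, the sketch below classifies albuminuria from a 24-hour collection (mg/day) or, as an added assumption not stated in the article, from a spot albumin-to-creatinine ratio using the conventional 30 and 300 mg/g cutoffs; the function and variable names are hypothetical, and a positive result still requires confirmation by repeat testing.

```python
# Minimal sketch, assuming the 24-hour thresholds quoted in the text
# (30-299 mg/day = microalbuminuria, >= 300 mg/day = macroalbuminuria) and,
# as an additional assumption, conventional spot albumin-to-creatinine
# cutoffs of 30 and 300 mg/g. A single positive result should be confirmed
# by repeat testing before the diagnosis is made.

def classify_albuminuria(albumin_mg_per_day=None, acr_mg_per_g=None):
    value = albumin_mg_per_day if albumin_mg_per_day is not None else acr_mg_per_g
    if value is None:
        raise ValueError("Provide a 24-hour excretion or a spot ratio")
    if value < 30:
        return "normal"
    if value < 300:
        return "microalbuminuria"
    return "macroalbuminuria"

print(classify_albuminuria(albumin_mg_per_day=180))  # microalbuminuria
print(classify_albuminuria(acr_mg_per_g=420))        # macroalbuminuria
```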
Occasionally a patient presenting with proteinuria but normal blood sugar and hemoglobin A1c will have a biopsy that reveals morphologic changes of classic diabetic nephropathy. Most such patients have a history of hyperglycemia, indicating that they actually have been diabetic.
Proteinuria—the best marker of disease progression
Proteinuria is the strongest predictor of renal outcomes. The Reduction in End Points in Noninsulin-Dependent Diabetes Mellitus With the Angiotensin II Antagonist Losartan (RENAAL) study was a randomized, placebo-controlled trial in more than 1,500 patients with type 2 diabetes to test the effects of losartan on renal outcome. Those with high albuminuria (> 3.0 g albumin/g creatinine) at baseline were five times more likely to reach a renal end point and were eight times more likely to have progression to end-stage renal disease than patients with low albuminuria (< 1.5 g/g).7 The degree of albuminuria after 6 months of treatment showed similar predictive trends, indicating that monitoring and treating proteinuria are extremely important goals.
STRATEGY 1 TO LIMIT RENAL INJURY: REDUCE BLOOD PRESSURE
Blood pressure control improves renal and cardiovascular function.
As early as 1983, Parving et al,8 in a study of only 10 insulin-dependent diabetic patients, showed strong evidence that early aggressive antihypertensive treatment improved the course of diabetic nephropathy. During the mean pretreatment period of 29 months, the GFR decreased significantly and the urinary albumin excretion rate and arterial blood pressure rose significantly. During the mean 39-month period of antihypertensive treatment with metoprolol, hydralazine, and furosemide or a thiazide, blood pressure fell from 144/97 to 128/84 mm Hg and urinary albumin excretion from 977 to 433 μg/min. The rate of decline in GFR slowed from 0.91 mL/min/month before treatment to 0.39 mL/min/month during treatment.
The Action in Diabetes and Vascular Disease: Preterax and Diamicron MR Controlled Evaluation (ADVANCE) trial9 enrolled more than 11,000 patients internationally with type 2 diabetes at high risk for cardiovascular events. In addition to standard therapy, blood pressure was intensively controlled in one group with a combination of the angiotensin-converting enzyme (ACE) inhibitor perindopril and the diuretic indapamide. The intensive-therapy group achieved blood pressures less than 140/80 mm Hg and had a mean reduction of systolic blood pressure of 5.6 mm Hg and diastolic blood pressure of 2.2 mm Hg vs controls. Despite these apparently modest reductions, the intensively controlled group had a significant 9% reduction of the primary outcome of combined macrovascular events (cardiovascular death, myocardial infarction, and stroke) and microvascular events (new or worsening nephropathy, or retinopathy).10
A meta-analysis of studies of patients with type 2 diabetes found reduced nephropathy with systolic blood pressure control to less than 130 mm Hg.11
The United Kingdom Prospective Diabetes Study (UKPDS) is a series of studies of diabetes. The original study in 1998 enrolled 5,102 patients with newly diagnosed type 2 diabetes.12 The more than 1,000 patients with hypertension were randomized to either tight blood pressure control or regular care. The intensive treatment group had a mean blood pressure reduction of 9 mm Hg systolic and 3 mm Hg diastolic, along with major reductions in all diabetes end points, diabetes deaths, microvascular disease, and stroke over a median follow-up of 8.4 years.
Continuous blood pressure control is critical
Tight blood pressure control must be maintained to have continued benefit. During the 10 years following the UKPDS, no attempts were made to maintain the previously assigned therapies. A follow-up study13 of 884 UKPDS patients found that blood pressures in the two groups converged within 2 years after the trial was stopped, and no beneficial legacy effect of the earlier blood pressure control on end points was evident.
Control below 120 mm Hg systolic not needed
Blood pressure control slows kidney disease and prevents major macrovascular disease, but there is no evidence that lowering systolic blood pressure below 120 mm Hg provides additional benefit. In the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial,14 more than 10,000 patients with type 2 diabetes and existing cardiovascular disease or additional cardiovascular risk factors were randomized to a goal of systolic blood pressure less than 120 mm Hg or less than 140 mm Hg (actual mean systolic pressures were 119 vs 134 mm Hg, respectively). Over nearly 5 years, there was no difference in cardiovascular events or deaths between the two groups.15
Since 1997, six international organizations have revised their recommended blood pressure goals in diabetes mellitus and renal diseases. Randomized clinical trials and observational studies have demonstrated the importance of blood pressure control to the level of 125/75 to 140/80 mm Hg. The National Kidney Foundation, the American Diabetes Association, and the Canadian Hypertension Society have developed consensus guidelines for blood pressure control to less than 130/80 mm Hg.16–21 Table 3 summarizes blood pressure goals for patients with diabetes.
STRATEGY 2: CONTROL BLOOD SUGAR
Recommendations for blood sugar goals are more controversial.
The Diabetes Control and Complications Trial22 provided early evidence that tight blood sugar control slows the development of microalbuminuria and macroalbuminuria. The study randomized more than 1,400 patients with type 1 diabetes to either standard therapy (1 or 2 daily insulin injections) or intensive therapy (an external insulin pump or 3 or more insulin injections guided by frequent blood glucose monitoring) to keep blood glucose levels close to normal. About half the patients had mild retinopathy at baseline and the others had no retinopathy. After 6.5 years, intensive therapy was found to significantly delay the onset and slow the progression of diabetic retinopathy and nephropathy.
The Kumamoto Study23 randomized 110 patients with type 2 diabetes and either no retinopathy (primary prevention cohort) or simple retinopathy (secondary prevention cohort) to receive either multiple insulin injections or conventional insulin therapy over 8 years. Intensive therapy led to lower rates of retinopathy (7.7% vs 32% in primary prevention and 19% vs 44% in secondary prevention) and progressive nephropathy (7% vs 28% in primary prevention at 6 years and 11% vs 32% in secondary prevention).
In addition to studying the effects of blood pressure control, the UKPDS also studied the effects of intensive blood glucose control.24,25 Nearly 4,000 patients with newly diagnosed type 2 diabetes were randomized to intensive treatment with a sulfonylurea or insulin, or to conventional treatment with diet. Over 10 years, the mean hemoglobin A1c was reduced to 7.0% in the intensive group and 7.9% in the conventional group. The risk of any diabetes-related end point was 12% lower in the intensive group, 10% lower for diabetes-related death, and 6% lower for all-cause mortality. There was also a 25% reduction in microvascular disease (retinopathy and nephropathy). However, the intensive group had more hypoglycemic episodes than the conventional group and a tendency to some increase in macrovascular events. A legacy effect was evident: patients who had intensive treatment had less microvascular disease progression years after stopping therapy.
Tight glycemic control reduces nephropathy, but does it increase cardiovascular risk?
Earlier trials provided strong evidence that blood glucose control prevents or slows retinopathy and nephropathy. The critical question is, “At what expense?” Although diabetes is the most common cause of kidney failure in the United States, most people with diabetes do not die of kidney failure, but of cardiovascular disease. Two recent large trials had different results regarding glycemic control below hemoglobin A1c of 7.0% and macrovascular risk, creating a controversy about what recommendations are best.
The ADVANCE trial, enrolling 11,140 patients with type 2 diabetes, was largely conducted in Australia and used the sulfonylurea gliclazide for glycemic control. The intensive-treatment group (n = 5,571) achieved a mean hemoglobin A1c of 6.5%, compared with 7.3% in the standard-therapy group (n = 5,569), and had less nephropathy, less microalbuminuria, less doubling of the serum creatinine, and a lower rate of end-stage renal disease (4% vs 5%). No difference between the two groups was found in retinopathy, and rates of all-cause mortality did not differ between the groups.9
The ACCORD trial had more than 10,000 subjects with type 2 diabetes and took place mostly in the United States. Using mainly rosiglitazone for intensive therapy, the intensive group achieved hemoglobin A1c levels of 6.4% vs 7.5% in the standard-therapy group. The trial was stopped early, at 3.7 years, because of a higher risk of death and cardiovascular events in the group with intensive glycemic control. However, the intensive-therapy group did have a significant decrease in microvascular renal outcomes and a reduction in the progression of retinopathy.14,26
In summary, tighter glycemic control improves microvascular complications—both retinopathy and nephropathy—in patients with type 2 diabetes. The benefit of intensive therapy on macrovascular complications (stroke, myocardial infarction) in long-standing diabetes has not been convincingly demonstrated in randomized trials. The UKPDS suggested that maintaining a hemoglobin A1c of 7% in patients newly diagnosed with type 2 diabetes confers long-term cardiovascular benefits. The target hemoglobin A1c for type 2 diabetes should be tailored to the patient: 7% is a reasonable goal for most patients, but the goal should be higher for the elderly and frail. Reducing the risk of cardiovascular death is still best done by controlling blood pressure, reducing lipids, quitting smoking, and losing weight.
STRATEGY 3: INHIBIT THE RENIN-ANGIOTENSIN-ALDOSTERONE AXIS
Components of the renin-angiotensin-aldosterone system are present not only in the circulation but also in many tissues, including the heart, brain, kidney, blood vessels, and adrenal glands. The role of renin-angiotensin-aldosterone system blockers in treating and preventing diabetic nephropathy has become controversial in recent years with findings from new studies.
The renin-angiotensin-aldosterone system is important in the development or maintenance of high blood pressure and the resultant damage to the brain, heart, and kidney. Drug development has focused on inhibiting steps in the biochemical pathway. ACE inhibitors block the formation of angiotensin II—the most biologically potent angiotensin peptide—and are among the most commonly used drugs to treat hypertension and concomitant conditions, such as renal insufficiency, proteinuria, and heart failure. Angiotensin receptor blockers (ARBs) interact with the angiotensin AT1 receptor and block most of its actions. They are approved by the US Food and Drug Administration (FDA) for the treatment of hypertension, and they help prevent left ventricular hypertrophy and mesangial sclerosis. Large studies have shown that ACE inhibitors and ARBs offer similar cardiovascular benefit.
The glomerulus has the only capillary bed with a blood supply that drains into an efferent arteriole instead of a venule, providing high resistance to aid filtration. Efferent arterioles are rich in AT1 receptors. In the presence of angiotensin II they constrict, increasing pressure in the glomerulus, which can lead to proteinuria and glomerulosclerosis. ACE inhibitors and ARBs relax the efferent arteriole, allowing increased blood flow through the glomerulus. This reduction in intraglomerular pressure is associated with less proteinuria and less glomerulosclerosis.
Diabetes promotes renal disease in many ways. Glucose and advanced glycation end products can lead to increased blood flow and increased pressure in the glomerulus. Through a variety of pathways, hyperglycemia, acting on angiotensin II, leads to production of nuclear factor kappa B (NF-κB), profibrotic cytokines, increased matrix, and eventual fibrosis. ACE inhibitors and ARBs counteract many of these effects.
ACE inhibitors and ARBs slow nephropathy progression beyond blood pressure control
Several major clinical trials27–32 examined the effects of either ACE inhibitors or ARBs in slowing the progression of diabetic nephropathy and have had consistently positive results.
The Collaborative Study Group30 was a 3-year randomized trial in 419 patients with type 1 diabetes, using the ACE inhibitor captopril vs placebo. Captopril was associated with less decline in kidney function and a 50% reduction in the risk of the combined end points of death, dialysis, and transplantation that was independent of the small difference in blood pressures between the two groups.
The Irbesartan Diabetic Nephropathy Trial (IDNT)31 studied the effect of the ARB irbesartan vs the calcium channel blocker amlodipine vs placebo over 2.6 years in 1,715 patients with type 2 diabetes. Irbesartan was found to be significantly more effective in protecting against the progression of nephropathy, independent of reduction in blood pressure.
The RENAAL trial,32 published in 2001, was a 3-year, randomized, double-blind study comparing the ARB losartan at increasing dosages with placebo (both taken in addition to conventional antihypertensive treatment) in 1,513 patients with type 2 diabetes and nephropathy. The blood pressure goal was 140/90 mm Hg in both groups, but the losartan group had a lower rate of doubling of serum creatinine, end-stage renal disease, and combined end-stage renal disease or death.
‘Aldosterone escape’ motivates the search for new therapies
An important reason for developing more ways to block the renin-angiotensin-aldosterone system is because of “aldosterone escape,” the phenomenon of angiotensin II or aldosterone returning to pretreatment levels despite continued ACE inhibition.
Biollaz et al,33 in a 1982 study of 19 patients with hypertension, showed that despite reducing blood pressure and keeping the blood level of ACE very low with twice-daily enalapril 20 mg, blood and urine levels of angiotensin II steadily rose back to baseline levels within a few months.
A growing body of evidence suggests that despite effective ACE inhibition, non-ACE synthetic pathways still permit angiotensin II generation via serine proteases such as chymase, cathepsin G, and tissue plasminogen activator.
Thus, efforts have been made to block the renin-angiotensin system in other places. In addition to ACE inhibitors and ARBs, two aldosterone receptor antagonists are available, spironolactone and eplerenone, both used to treat heart failure. A direct renin inhibitor, aliskiren, is also available.
Combination therapy—less proteinuria, but…
A number of studies have shown that combination treatment with agents having different targets in the renin-angiotensin-aldosterone system leads to larger reductions in albuminuria than does single-agent therapy.
Mogensen et al34 studied the effect of the ACE inhibitor lisinopril (20 mg per day) plus the ARB candesartan (16 mg per day) in subjects with microalbuminuria, hypertension, and type 2 diabetes. Combined treatment was more effective in reducing proteinuria.
Epstein et al35 studied the effects of the ACE inhibitor enalapril (20 mg/day) combined with either of two doses of the selective aldosterone receptor antagonist eplerenone (50 or 100 mg/day) or placebo. Both eplerenone dosages, when added to the enalapril treatment, significantly reduced albuminuria from baseline as early as week 4 (P < .001), but placebo treatment added to the enalapril did not result in any significant decrease in urinary albumin excretion. Systolic blood pressure decreased significantly in all treatment groups and by about the same amount.
The Aliskiren Combined With Losartan in Type 2 Diabetes and Nephropathy (AVOID) trial36 randomized more than 600 patients with type 2 diabetes and nephropathy to aliskiren (a renin inhibitor) or placebo added to the ARB losartan. Again, combination treatment was more renoprotective, independent of blood pressure lowering.
Worse outcomes with combination therapy?
More recent studies have indicated that although combination therapy reduces proteinuria to a greater extent than monotherapy, overall it worsens major renal and cardiovascular outcomes. The multicenter Ongoing Telmisartan Alone and in Combination With Ramipril Global Endpoint Trial (ONTARGET)37 randomized more than 25,000 patients age 55 and older with established atherosclerotic vascular disease or with diabetes and end-organ damage to receive either the ARB telmisartan 80 mg daily, the ACE inhibitor ramipril 10 mg daily, or both. Mean follow-up was 56 months. The combination-treatment group had higher rates of death and renal disease than the single-therapy groups (which did not differ from one another).
Why the combination therapy had poorer outcomes is under debate. Patients may get sudden drops in blood pressure that are not detected with only periodic monitoring. Renal failure was mostly acute rather than chronic, and the estimated GFR declined more in the combined therapy group than in the single-therapy groups.
The Aliskiren Trial in Type 2 Diabetes Using Cardiovascular and Renal Disease Endpoints (ALTITUDE) was designed to test the effect of the direct renin inhibitor aliskiren vs placebo, each added to either an ACE inhibitor or an ARB, in patients with type 2 diabetes at high risk for cardiovascular and renal events. The trial was terminated early because of more strokes and deaths in the aliskiren (combination-therapy) arm. The results led the FDA to issue black box warnings against using aliskiren with these other classes of agents, and all studies testing similar combinations have been stopped. (In one study that was stopped and has not yet been published, 100 patients with proteinuria were treated with aliskiren, the ARB losartan, or both to evaluate aldosterone escape. Results showed no differences: about one-third of each group had the phenomenon.)
My personal recommendation is as follows: for younger patients with proteinuria who are at lower risk of cardiovascular events and whose disease is due not to diabetes but to immunoglobulin A nephropathy or another proteinuric kidney disease, treat with both an ACE inhibitor and an ARB. The combination should not be used in patients at high risk of cardiovascular disease, which includes almost all patients with diabetes.
If more aggressive renin-angiotensin system blockade is needed against diabetic nephropathy, adding a diuretic increases the impact of blocking the renin-angiotensin-aldosterone system on both proteinuria and progression of renal disease. The aldosterone blocker spironolactone 25 mg can be added if potassium levels are carefully monitored.
ACE inhibitor plus calcium channel blocker is safer than ACE inhibitor plus diuretic
The Avoiding Cardiovascular Events Through Combination Therapy in Patients Living With Systolic Hypertension (ACCOMPLISH) trial38 randomized more than 11,000 high-risk patients with hypertension to receive an ACE inhibitor (benazepril) plus either a calcium channel blocker (amlodipine) or a thiazide diuretic (hydrochlorothiazide). Blood pressures were identical between the two groups, but the trial was terminated early, at 36 months, because of a higher risk of the combined end point of cardiovascular death, myocardial infarction, stroke, and other major cardiac events in the ACE inhibitor-thiazide group.
Although some experts believe this study is definitive and indicates that high blood pressure should never be treated with an ACE inhibitor-thiazide combination, I believe that caution is needed in interpreting these findings. This regimen should be avoided in older patients with diabetes at high risk for cardiovascular disease, but otherwise, getting blood pressure under control is critical, and this combination can be used if it works and the patient is tolerating it well.
In summary, the choice of blood pressure-lowering medications is based on reducing cardiovascular events and slowing the progression of kidney disease. Either an ACE inhibitor or an ARB is the first choice for patients with diabetes, hypertension, and any degree of proteinuria. Many experts recommend beginning one of these agents even if proteinuria is not present. However, the combination of an ACE inhibitor and ARB should not be used in diabetic patients, especially if they have cardiovascular disease, until further data clarify the results of the ONTARGET and ALTITUDE trials.
STRATEGY 4: METABOLIC MANIPULATION WITH NOVEL AGENTS
Several new agents have recently been studied for the treatment of diabetic nephropathy, including aminoguanidine, which reduces levels of advanced glycation end-products, and sulodexide, which blocks basement membrane permeability. Neither agent has been shown to be safe and effective in diabetic nephropathy. The newest agent is bardoxolone methyl. It induces the Keap1–Nrf2 pathway, which up-regulates cytoprotective factors, suppressing inflammatory and other cytokines that are major mediators of progression of chronic kidney disease.39
Pergola et al,40 in a phase 2, double-blind trial, randomized 227 adults with diabetic kidney disease and a low estimated GFR (20–45 mL/min/1.73 m2) to receive placebo or bardoxolone 25, 75, or 150 mg daily. Drug treatment was associated with improvement in the estimated GFR, a finding that persisted throughout the 52 weeks of treatment. Surprisingly, proteinuria did not decrease with drug treatment.
As of this writing, a large multicenter randomized controlled trial has been halted because of concerns by the data and safety monitoring board, which found increased rates of death and fluid retention with the drug. Separately, a number of recent trials have shown a beneficial effect of sodium bicarbonate therapy in patients with late-stage chronic kidney disease, slowing the decline in GFR in a number of renal diseases, including diabetic nephropathy.
- Ritz E, Orth SR. Nephropathy in patients with type 2 diabetes mellitus. N Engl J Med 1999; 341:1127–1133.
- de Boer IH, Rue TC, Hall YN, Heagerty PJ, Weiss NS, Himmelfarb J. Temporal trends in the prevalence of diabetic kidney disease in the United States. JAMA 2011; 305:2532–2539.
- United States Renal Data System (USRDS) 2000 Annual Data Report. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases – Division of Kidney, Urologic and Hematologic Diseases. USRDS Coordinating Center operated by the Minneapolis Medical Research Foundation. www.usrds.org
- Macisaac RJ, Jerums G. Diabetic kidney disease with and without albuminuria. Curr Opin Nephrol Hypertens 2011; 20:246–257.
- Molitch ME, DeFronzo RA, Franz MJ, et al; American Diabetes Association. Nephropathy in diabetes. Diabetes Care 2004; 27(suppl 1):S79–S83.
- Vamos EP, Harris M, Millett C, et al. Association of systolic and diastolic blood pressure and all cause mortality in people with newly diagnosed type 2 diabetes: retrospective cohort study. BMJ 2012; 345:e5567.
- de Zeeuw D, Remuzzi G, Parving HH, et al. Proteinuria, a target for renoprotection in patients with type 2 diabetic nephropathy: lessons from RENAAL. Kidney Int 2004; 65:2309–2320.
- Parving HH, Andersen AR, Smidt UM, Svendsen PA. Early aggressive antihypertensive treatment reduces rate of decline in kidney function in diabetic nephropathy. Lancet 1983; 1:1175–1179.
- ADVANCE Collaborative Group; Patel A, MacMahon S, Chalmers J, et al. Intensive blood glucose control and vascular outcomes in patients with type 2 diabetes. N Engl J Med 2008; 358:2560–2572.
- ADVANCE Collaborative Group; Patel A, MacMahon S, Chalmers J, et al. Effects of a fixed combination of perindopril and indapamide on macrovascular and microvascular outcomes in patients with type 2 diabetes mellitus (the ADVANCE trial): a randomised controlled trial. Lancet 2007; 370:829–840.
- Bangalore S, Kumar S, Lobach I, Messerli FH. Blood pressure targets in subjects with type 2 diabetes mellitus/impaired fasting glucose: observations from traditional and bayesian random-effects meta-analyses of randomized trials. Circulation 2011; 123:2799–2810.
- UK Prospective Diabetes Study Group. Tight blood pressure control and risk of macrovascular and microvascular complications in type 2 diabetes: UKPDS 38. BMJ 1998; 317:703–713.
- Holman RR, Paul SK, Bethel MA, Matthews DR, Neil HA. 10-year follow-up of intensive glucose control in type 2 diabetes. N Engl J Med 2008; 359:1577–1589.
- Action to Control Cardiovascular Risk in Diabetes Study Group; Gerstein HC, Miller ME, Byington RP, et al. Effects of intensive glucose lowering in type 2 diabetes. N Engl J Med 2008; 358:2545–2559.
- ACCORD Study Group; Cushman WC, Evans GW, Byington RP, et al. Effects of intensive blood-pressure control in type 2 diabetes mellitus. N Engl J Med 2010; 362:1575–1585.
- American Diabetes Association. Standards of medical care in diabetes—2012. Diabetes Care 2012; 35(suppl 1):S11–S63. (Erratum in: Diabetes Care 2012; 35:660.)
- Bakris GL, Williams M, Dworkin L, et al. Preserving renal function in adults with hypertension and diabetes: a consensus approach. National Kidney Foundation Hypertension and Diabetes Executive Committees Working Group. Am J Kidney Dis 2000; 36:646–661.
- Ramsay L, Williams B, Johnston G, et al. Guidelines for management of hypertension: report of the third working party of the British Hypertension Society. J Hum Hypertens 1999; 13:569–592.
- Feldman RD, Campbell N, Larochelle P, et al. 1999 Canadian recommendations for the management of hypertension. Task Force for the Development of the 1999 Canadian Recommendations for the Management of Hypertension. CMAJ 1999; 161(suppl 12):S1–S17.
- Chalmers J, MacMahon S, Mancia G, et al. 1999 World Health Organization-International Society of Hypertension Guidelines for the management of hypertension. Guidelines Sub-committee of the World Health Organization. Clin Exp Hypertens 1999; 21:1009–1060.
- The seventh report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. Hypertension 2003; 42:1206–1252.
- The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. The Diabetes Control and Complications Trial Research Group. N Engl J Med 1993; 329:977–986.
- Shichiri M, Kishikawa H, Ohkubo Y, Wake N. Long-term results of the Kumamoto Study on optimal diabetes control in type 2 diabetic patients. Diabetes Care 2000; 23(suppl 2):B21–B29.
- Intensive blood-glucose control with sulphonylureas or insulin compared with conventional treatment and risk of complications in patients with type 2 diabetes (UKPDS 33). UK Prospective Diabetes Study (UKPDS) Group. Lancet 1998; 352:837–853. Erratum in: Lancet 1999; 354:602.
- Tight blood pressure control and risk of macrovascular and microvascular complications in type 2 diabetes: UKPDS 38. UK Prospective Diabetes Study Group. BMJ 1998; 317:703–713. Erratum in: BMJ 1999; 318:29.
- Ismail-Beigi F, Craven T, Banerji MA, et al; ACCORD trial group. Effect of intensive treatment of hyperglycaemia on microvascular outcomes in type 2 diabetes: an analysis of the ACCORD randomised trial. Lancet 2010; 376:419–430. Erratum in: Lancet 2010; 376:1466.
- Effects of ramipril on cardiovascular and microvascular outcomes in people with diabetes mellitus: results of the HOPE study and MICRO-HOPE substudy. Heart Outcomes Prevention Evaluation Study Investigators. Lancet 2000; 355:253–259. Erratum in: Lancet 2000; 356:860.
- Parving HH, Lehnert H, Bröchner-Mortensen J, et al; Irbesartan in Patients with Type 2 Diabetes and Microalbuminuria Study Group. The effect of irbesartan on the development of diabetic nephropathy in patients with type 2 diabetes. N Engl J Med 2001; 345:870–878.
- Viberti G, Wheeldon NM; MicroAlbuminuria Reduction With VALsartan (MARVAL) Study Investigators. Microalbuminuria reduction with valsartan in patients with type 2 diabetes mellitus: a blood pressure-independent effect. Circulation 2002; 106:672–678.
- Lewis EJ, Hunsicker LG, Bain RP, Rohde RD. The effect of angiotensin-converting-enzyme inhibition on diabetic nephropathy. The Collaborative Study Group. N Engl J Med 1993; 329:1456–1462.
- Lewis EJ, Hunsicker LG, Clarke WR, et al; Collaborative Study Group. Renoprotective effect of the angiotensin-receptor antagonist irbesartan in patients with nephropathy due to type 2 diabetes. N Engl J Med 2001; 345:851–860.
- Brenner BM, Cooper ME, de Zeeuw D, et al; RENAAL Study Investigators. Effects of losartan on renal and cardiovascular outcomes in patients with type 2 diabetes and nephropathy. N Engl J Med 2001; 345:861–869.
- Biollaz J, Brunner HR, Gavras I, Waeber B, Gavras H. Antihypertensive therapy with MK 421: angiotensin II--renin relationships to evaluate efficacy of converting enzyme blockade. J Cardiovasc Pharmacol 1982; 4:966–972.
- Mogensen CE, Neldam S, Tikkanen I, et al. Randomised controlled trial of dual blockade of renin-angiotensin system in patients with hypertension, microalbuminuria, and non-insulin dependent diabetes: the candesartan and lisinopril microalbuminuria (CALM) study. BMJ 2000; 321:1440–1444.
- Epstein M, Williams GH, Weinberger M, et al. Selective aldosterone blockade with eplerenone reduces albuminuria in patients with type 2 diabetes. Clin J Am Soc Nephrol 2006; 1:940–951.
- Parving HH, Persson F, Lewis JB, Lewis EJ, Hollenberg NK; AVOID Study Investigators. Aliskiren combined with losartan in type 2 diabetes and nephropathy. N Engl J Med 2008; 358:2433–2446.
- Mann JF, Schmieder RE, McQueen M, et al; ONTARGET investigators. Renal outcomes with telmisartan, ramipril, or both, in people at high vascular risk (the ONTARGET study): a multicentre, randomised, double-blind, controlled trial. Lancet 2008; 372:547–553.
- Jamerson K, Weber MA, Bakris GL, et al; ACCOMPLISH Trial Investigators. Benazepril plus amlodipine or hydrochlorothiazide for hypertension in high-risk patients. N Engl J Med 2008; 359:2417–2428.
- Kim HJ, Vaziri ND. Contribution of impaired Nrf2-Keap1 pathway to oxidative stress and inflammation in chronic renal failure. Am J Physiol Renal Physiol 2010; 298:F662–F671.
- Pergola PE, Raskin P, Toto RD, et al; BEAM Study Investigators. Bardoxolone methyl and kidney function in CKD with type 2 diabetes. N Engl J Med 2011; 365:327–336.
Diabetes is on the rise, and so is diabetic nephropathy. In view of this epidemic, physicians should consider strategies to detect and control kidney disease in their diabetic patients.
This article will focus on kidney disease in adult-onset type 2 diabetes. Although it has different pathogenetic mechanisms than type 1 diabetes, the clinical course of the two conditions is very similar in terms of the prevalence of proteinuria after diagnosis, the progression to renal failure after the onset of proteinuria, and treatment options.1
DIABETES AND DIABETIC KIDNEY DISEASE ARE ON THE RISE
The incidence of diabetes increases with age, and with the aging of the baby boomers, its prevalence is growing dramatically. The 2005–2008 National Health and Nutrition Examination Survey estimated the prevalence as 3.7% in adults age 20 to 44, 13.7% at age 45 to 64, and 26.9% in people age 65 and older. The obesity epidemic is also contributing to the increase in diabetes in all age groups.
Diabetic kidney disease has increased in the United States from about 4 million cases 20 years ago to about 7 million in 2005–2008.2 Diabetes is the major cause of end-stage renal disease in the developed world, accounting for 40% to 50% of cases. Other major causes are hypertension (27%) and glomerulonephritis (13%).3
Physicians in nearly every field of medicine now care for patients with diabetic nephropathy. The classic presentation—a patient who has impaired vision, fluid retention with edema, and hypertension—is commonly seen in dialysis units and ophthalmology and cardiovascular clinics.
CLINICAL PROGRESSION
Early in the course of diabetic nephropathy, blood pressure is normal and microalbuminuria is not evident, but many patients have a high glomerular filtration rate (GFR), indicating temporarily “enhanced” renal function or hyperfiltration. The next stage is characterized by microalbuminuria, correlating with glomerular mesangial expansion: the GFR falls back into the normal range and blood pressure starts to increase. Finally, macroalbuminuria occurs, accompanied by rising blood pressure and a declining GFR, correlating with the histologic appearance of glomerulosclerosis and Kimmelstiel-Wilson nodules.4
Hypertension develops in 5% of patients by 10 years after type 1 diabetes is diagnosed, 33% by 20 years, and 70% by 40 years. In contrast, 40% of patients with type 2 diabetes have high blood pressure at diagnosis.
Unfortunately, in most cases, this progression is a one-way street, so it is critical to intervene to try to slow the progression early in the course of the disease process.
SCREENING FOR DIABETIC NEPHROPATHY
Nephropathy screening guidelines for patients with diabetes are provided in Table 1.5
Blood pressure should be monitored at each office visit (Table 1). The goal for adults with diabetes should be to reduce blood pressure to less than 130/80 mm Hg. Reduction beyond this level may be associated with an increased mortality rate.6 Very high blood pressure (> 180 mm Hg systolic) should be lowered slowly. Lowering blood pressure delays the progression from microalbuminuria (30–299 mg/day or 20–199 μg/min) to macroalbuminuria (> 300 mg/day or > 200 μg/min) and slows the progression to renal failure.
Urinary albumin. Proteinuria takes 5 to 10 years to develop after the onset of diabetes. Because patients with type 2 diabetes may have had the disease for some time before being diagnosed, urinary albumin screening should be performed at diagnosis and annually thereafter. Patients with type 1 diabetes are usually diagnosed at or near the onset of disease; therefore, annual screening for urinary albumin can begin 5 years after diagnosis.5
Proteinuria can be measured in different ways (Table 2). The basic screening test for clinical proteinuria is the urine dipstick, which is very sensitive to albumin and relatively insensitive to other proteins. “Trace-positive” results are common in healthy people, so proteinuria is not confirmed unless a patient has repeatedly positive results.
Microalbuminuria is important to measure, especially if it helps determine therapy. It is not detectable by the urinary dipstick, but can be measured in the following ways:
- Measurement of the albumin-creatinine ratio in a random spot collection
- 24-hour collection (creatinine should simultaneously be measured and creatinine clearance calculated)
- Timed collection (4 hours or overnight).
The first method is preferred, and any positive test result must be confirmed by repeat analyses of urinary albumin before a patient is diagnosed with microalbuminuria.
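Purely as an illustration of how the preferred spot-collection method is interpreted, the short sketch below classifies a spot albumin-to-creatinine ratio using conventional cutoffs of roughly 30 and 300 mg of albumin per gram of creatinine, which parallel the mg/day categories quoted earlier. The function name and cutoffs are assumptions for this example, not a validated clinical tool, and any positive result would still need confirmation by repeat testing.

```python
# Illustrative sketch only: classify a random spot urine sample by its
# albumin-to-creatinine ratio (ACR). Cutoffs of 30 and 300 mg/g parallel the
# mg/day categories cited in the text; this is not a clinical decision tool.

def classify_spot_acr(albumin_mg_per_dl: float, creatinine_g_per_dl: float) -> str:
    """Return an albuminuria category for a random spot urine collection."""
    acr_mg_per_g = albumin_mg_per_dl / creatinine_g_per_dl  # mg albumin per g creatinine
    if acr_mg_per_g < 30:
        return "normal"
    if acr_mg_per_g < 300:
        return "microalbuminuria (confirm with repeat testing)"
    return "macroalbuminuria (confirm with repeat testing)"

# Example: 8 mg/dL albumin with 0.1 g/dL creatinine gives an ACR of 80 mg/g,
# in the microalbuminuria range.
print(classify_spot_acr(8.0, 0.1))
```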
Occasionally a patient presenting with proteinuria but normal blood sugar and hemoglobin A1c will have a biopsy that reveals the morphologic changes of classic diabetic nephropathy. Most such patients have a history of hyperglycemia, indicating that they have in fact had diabetes.
Proteinuria—the best marker of disease progression
Proteinuria is the strongest predictor of renal outcomes. The Reduction in End Points in Noninsulin-Dependent Diabetes Mellitus With the Angiotensin II Antagonist Losartan (RENAAL) study was a randomized, placebo-controlled trial in more than 1,500 patients with type 2 diabetes to test the effects of losartan on renal outcome. Those with high albuminuria (> 3.0 g albumin/g creatinine) at baseline were five times more likely to reach a renal end point and were eight times more likely to have progression to end-stage renal disease than patients with low albuminuria (< 1.5 g/g).7 The degree of albuminuria after 6 months of treatment showed similar predictive trends, indicating that monitoring and treating proteinuria are extremely important goals.
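As a hedged restatement of the stratification just described, the thresholds and risk multipliers below simply encode the RENAAL figures quoted above; this is not a validated prognostic calculator, and the function name is invented for the example.

```python
# Illustrative restatement of the RENAAL baseline-albuminuria stratification
# described in the text; not a validated prognostic model.

def renaal_albuminuria_group(albumin_g_per_g_creatinine: float) -> str:
    """Map baseline urinary albumin/creatinine (g/g) to the groups quoted above."""
    if albumin_g_per_g_creatinine > 3.0:
        return ("high albuminuria: ~5-fold risk of a renal end point and "
                "~8-fold risk of progression to end-stage renal disease")
    if albumin_g_per_g_creatinine < 1.5:
        return "low albuminuria: reference group"
    return "intermediate albuminuria"

print(renaal_albuminuria_group(3.4))  # falls in the high-albuminuria group
```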
STRATEGY 1 TO LIMIT RENAL INJURY: REDUCE BLOOD PRESSURE
Blood pressure control improves renal and cardiovascular function.
As early as 1983, Parving et al,8 in a study of only 10 insulin-dependent diabetic patients, showed strong evidence that early aggressive antihypertensive treatment improved the course of diabetic nephropathy. During the mean pretreatment period of 29 months, the GFR decreased significantly and the urinary albumin excretion rate and arterial blood pressure rose significantly. During the mean 39-month period of antihypertensive treatment with metoprolol, hydralazine, and furosemide or a thiazide, mean blood pressure fell from 144/97 to 128/84 mm Hg and urinary albumin excretion fell from 977 to 433 μg/min. The rate of decline in GFR slowed from 0.91 mL/min/month before treatment to 0.39 mL/min/month during treatment.
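As a rough back-of-the-envelope illustration of what the slower decline means clinically (the starting GFR of 60 mL/min and the dialysis-range threshold of 15 mL/min are assumed values chosen for the arithmetic, not figures reported by Parving et al):

```latex
% Illustrative arithmetic only; 60 and 15 mL/min are assumptions, not study data.
\text{Untreated: } \frac{(60-15)\ \text{mL/min}}{0.91\ \text{mL/min/month}} \approx 49\ \text{months} \approx 4\ \text{years}
\qquad
\text{Treated: } \frac{(60-15)\ \text{mL/min}}{0.39\ \text{mL/min/month}} \approx 115\ \text{months} \approx 9.6\ \text{years}
```

On those assumptions, halving the rate of decline more than doubles the time before renal replacement therapy would be needed.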
The Action in Diabetes and Vascular Disease: Preterax and Diamicron MR Controlled Evaluation (ADVANCE) trial9 enrolled more than 11,000 patients internationally with type 2 diabetes at high risk for cardiovascular events. In addition to standard therapy, blood pressure was intensively controlled in one group with a combination of the angiotensin-converting enzyme (ACE) inhibitor perindopril and the diuretic indapamide. The intensive-therapy group achieved blood pressures less than 140/80 mm Hg and had a mean reduction of systolic blood pressure of 5.6 mm Hg and diastolic blood pressure of 2.2 mm Hg vs controls. Despite these apparently modest reductions, the intensively controlled group had a significant 9% reduction of the primary outcome of combined macrovascular events (cardiovascular death, myocardial infarction, and stroke) and microvascular events (new or worsening nephropathy, or retinopathy).10
A meta-analysis of studies of patients with type 2 diabetes found reduced nephropathy with systolic blood pressure control to less than 130 mm Hg.11
The United Kingdom Prospective Diabetes Study (UKPDS) is a series of studies of diabetes. The original study in 1998 enrolled 5,102 patients with newly diagnosed type 2 diabetes.12 The more than 1,000 patients with hypertension were randomized to either tight blood pressure control or regular care. The intensive treatment group had a mean blood pressure reduction of 9 mm Hg systolic and 3 mm Hg diastolic, along with major reductions in all diabetes end points, diabetes deaths, microvascular disease, and stroke over a median follow-up of 8.4 years.
Continuous blood pressure control is critical
Tight blood pressure control must be maintained to have continued benefit. During the 10 years following the UKPDS, no attempt was made to maintain the previously assigned therapies. A follow-up study13 of 884 UKPDS patients found that blood pressures in the two groups converged within 2 years after the trial was stopped, and no beneficial legacy effect of previous blood pressure control on end points was evident.
Control below 120 mm Hg systolic not needed
Blood pressure control slows kidney disease and prevents major macrovascular disease, but there is no evidence that lowering systolic blood pressure below 120 mm Hg provides additional benefit. In the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial,14 more than 10,000 patients with type 2 diabetes and existing cardiovascular disease or additional cardiovascular risk factors were randomized to a goal of systolic blood pressure less than 120 mm Hg or less than 140 mm Hg (actual mean systolic pressures were 119 vs 134 mm Hg, respectively). Over nearly 5 years, there was no difference in cardiovascular events or deaths between the two groups.15
Since 1997, six international organizations have revised their recommended blood pressure goals in diabetes mellitus and renal diseases. Randomized clinical trials and observational studies have demonstrated the importance of blood pressure control to the level of 125/75 to 140/80 mm Hg. The National Kidney Foundation, the American Diabetes Association, and the Canadian Hypertension Society have developed consensus guidelines for blood pressure control to less than 130/80 mm Hg.16–21 Table 3 summarizes blood pressure goals for patients with diabetes.
STRATEGY 2: CONTROL BLOOD SUGAR
Recommendations for blood sugar goals are more controversial.
The Diabetes Control and Complications Trial22 provided early evidence that tight blood sugar control slows the development of microalbuminuria and macroalbuminuria. The study randomized more than 1,400 patients with type 1 diabetes to either standard therapy (1 or 2 daily insulin injections) or intensive therapy (an external insulin pump or 3 or more insulin injections guided by frequent blood glucose monitoring) to keep blood glucose levels close to normal. About half the patients had mild retinopathy at baseline and the others had no retinopathy. After 6.5 years, intensive therapy was found to significantly delay the onset and slow the progression of diabetic retinopathy and nephropathy.
The Kumamoto Study23 randomized 110 patients with type 2 diabetes and either no retinopathy (primary prevention cohort) or simple retinopathy (secondary prevention cohort) to receive either multiple insulin injections or conventional insulin therapy over 8 years. Intensive therapy led to lower rates of retinopathy (7.7% vs 32% in primary prevention and 19% vs 44% in secondary prevention) and progressive nephropathy (7% vs 28% in primary prevention at 6 years and 11% vs 32% in secondary prevention).
In addition to studying the effects of blood pressure control, the UKPDS also studied the effects of intensive blood glucose control.24,25 Nearly 4,000 patients with newly diagnosed type 2 diabetes were randomized to intensive treatment with a sulfonylurea or insulin, or to conventional treatment with diet. Over 10 years, the mean hemoglobin A1c was reduced to 7.0% in the intensive group and 7.9% in the conventional group. The risk of any diabetes-related end point was 12% lower in the intensive group, 10% lower for diabetes-related death, and 6% lower for all-cause mortality. There was also a 25% reduction in microvascular disease (retinopathy and nephropathy). However, the intensive group had more hypoglycemic episodes than the conventional group and a tendency to some increase in macrovascular events. A legacy effect was evident: patients who had intensive treatment had less microvascular disease progression years after stopping therapy.
Tight glycemic control reduces nephropathy, but does it increase cardiovascular risk?
Earlier trials provided strong evidence that blood glucose control prevents or slows retinopathy and nephropathy. The critical question is, “At what expense?” Although diabetes is the most common cause of kidney failure in the United States, most people with diabetes do not die of kidney failure, but of cardiovascular disease. Two recent large trials had different results regarding glycemic control below hemoglobin A1c of 7.0% and macrovascular risk, creating a controversy about what recommendations are best.
The ADVANCE trial enrolled 11,140 patients with type 2 diabetes, was coordinated from Australia, and used the sulfonylurea gliclazide (modified release) for intensive glycemic control. The intensive-treatment group (n=5,571) achieved a mean hemoglobin A1c of 6.5%, vs 7.3% in the standard-therapy group (n=5,569), and had less nephropathy, less microalbuminuria, less doubling of creatinine, and a lower rate of end-stage renal disease (4% vs 5%). No difference between the two groups was found in retinopathy, and rates of all-cause mortality did not differ.9
The ACCORD trial had more than 10,000 subjects with type 2 diabetes and took place mostly in the United States. Using mainly rosiglitazone for intensive therapy, the intensive group achieved hemoglobin A1c levels of 6.4% vs 7.5% in the standard-therapy group. The trial was stopped early, at 3.7 years, because of a higher risk of death and cardiovascular events in the group with intensive glycemic control. However, the intensive-therapy group did have a significant decrease in microvascular renal outcomes and a reduction in the progression of retinopathy.14,26
In summary, tighter glycemic control improves microvascular complications—both retinopathy and nephropathy—in patients with type 2 diabetes. The benefit of intensive therapy on macrovascular complications (stroke, myocardial infarction) in long-standing diabetes has not been convincingly demonstrated in randomized trials. The UKPDS suggested that maintaining a hemoglobin A1c of 7% in patients newly diagnosed with type 2 diabetes confers long-term cardiovascular benefits. The target hemoglobin A1c for type 2 diabetes should be tailored to the patient: 7% is a reasonable goal for most patients, but the goal should be higher for the elderly and frail. Reducing the risk of cardiovascular death is still best done by controlling blood pressure, reducing lipids, quitting smoking, and losing weight.
STRATEGY 3: INHIBIT THE RENIN-ANGIOTENSIN-ALDOSTERONE AXIS
Components of the renin-angiotensin-aldosterone system are present not only in the circulation but also in many tissues, including the heart, brain, kidney, blood vessels, and adrenal glands. The role of renin-angiotensin-aldosterone system blockers in treating and preventing diabetic nephropathy has become controversial in recent years with findings from new studies.
The renin-angiotensin-aldosterone system is important in the development or maintenance of high blood pressure and the resultant damage to the brain, heart, and kidney. Drug development has focused on inhibiting steps in the biochemical pathway. ACE inhibitors block the formation of angiotensin II—the most biologically potent angiotensin peptide—and are among the most commonly used drugs to treat hypertension and concomitant conditions, such as renal insufficiency, proteinuria, and heart failure. Angiotensin receptor blockers (ARBs) interact with the angiotensin AT1 receptor and block most of its actions. They are approved by the US Food and Drug Administration (FDA) for the treatment of hypertension, and they help prevent left ventricular hypertrophy and mesangial sclerosis. Large studies have shown that ACE inhibitors and ARBs offer similar cardiovascular benefit.
The glomerulus has the only capillary bed with a blood supply that drains into an efferent arteriole instead of a venule, providing high resistance to aid filtration. Efferent arterioles are rich in AT1 receptors. In the presence of angiotensin II they constrict, increasing pressure in the glomerulus, which can lead to proteinuria and glomerulosclerosis. ACE inhibitors and ARBs relax the efferent arteriole, allowing increased blood flow through the glomerulus. This reduction in intraglomerular pressure is associated with less proteinuria and less glomerulosclerosis.
Diabetes promotes renal disease in many ways. Glucose and advanced glycation end products can lead to increased blood flow and increased pressure in the glomerulus. Through a variety of pathways, hyperglycemia, acting on angiotensin II, leads to NF-kapa beta production, profibrotic cytokines, increased matrix, and eventual fibrosis. ACE inhibitors and ARBs counteract many of these.
ACE inhibitors and ARBs slow nephropathy progression beyond blood pressure control
Several major clinical trials27–32 examined the effects of either ACE inhibitors or ARBs in slowing the progression of diabetic nephropathy and have had consistently positive results.
The Collaborative Study Group30 was a 3-year randomized trial in 419 patients with type 1 diabetes, using the ACE inhibitor captopril vs placebo. Captopril was associated with less decline in kidney function and a 50% reduction in the risk of the combined end points of death, dialysis, and transplantation that was independent of the small difference in blood pressures between the two groups.
The Irbesartan Diabetic Nephropathy Trial (IDNT)31 studied the effect of the ARB irbesartan vs the calcium channel blocker amlodipine vs placebo over 2.6 years in 1,715 patients with type 2 diabetes. Irbesartan was found to be significantly more effective in protecting against the progression of nephropathy, independent of reduction in blood pressure.
The RENAAL trial,32 published in 2001, was a 3-year, randomized, double-blind study comparing the ARB losartan at increasing dosages with placebo (both taken in addition to conventional antihypertensive treatment) in 1,513 patients with type 2 diabetes and nephropathy. The blood pressure goal was 140/90 mm Hg in both groups, but the losartan group had a lower rate of doubling of serum creatinine, end-stage renal disease, and combined end-stage renal disease or death.
‘Aldosterone escape’ motivates the search for new therapies
An important reason for developing more ways to block the renin-angiotensin-aldosterone system is because of “aldosterone escape,” the phenomenon of angiotensin II or aldosterone returning to pretreatment levels despite continued ACE inhibition.
Biollaz et al,33 in a 1982 study of 19 patients with hypertension, showed that despite reducing blood pressure and keeping the blood level of ACE very low with twice-daily enalapril 20 mg, blood and urine levels of angiotensin II steadily rose back to baseline levels within a few months.
A growing body of evidence suggests that despite effective inhibition of angiotensin II activity, non-ACE synthetic pathways still permit angiotensin II generation via serine proteases such as chymase, cathepsin G, and tissue plasminogen activator.
Thus, efforts have been made to block the renin-angiotensin system in other places. In addition to ACE inhibitors and ARBs, two aldosterone receptor antagonists are available, spironolactone and eplerenone, both used to treat heart failure. A direct renin inhibitor, aliskiren, is also available.
Combination therapy—less proteinuria, but…
A number of studies have shown that combination treatment with agents having different targets in the renin-angiotensin-aldosterone system leads to larger reductions in albuminuria than does single-agent therapy.
Mogensen et al34 studied the effect of the ACE inhibitor lisinopril (20 mg per day) plus the ARB candesartan (16 mg per day) in subjects with microalbuminuria, hypertension, and type 2 diabetes. Combined treatment was more effective in reducing proteinuria.
Epstein et al35 studied the effects of the ACE inhibitor enalapril (20 mg/day) combined with either of two doses of the selective aldosterone receptor antagonist eplerenone (50 or 100 mg/day) or placebo. Both eplerenone dosages, when added to the enalapril treatment, significantly reduced albuminuria from baseline as early as week 4 (P < .001), but placebo treatment added to the enalapril did not result in any significant decrease in urinary albumin excretion. Systolic blood pressure decreased significantly in all treatment groups and by about the same amount.
The Aliskiren Combined With Losartan in Type 2 Diabetes and Nephropathy (AVOID) trial36 randomized more than 600 patients with type 2 diabetes and nephropathy to aliskiren (a renin inhibitor) or placebo added to the ARB losartan. Again, combination treatment was more renoprotective, independent of blood pressure lowering.
Worse outcomes with combination therapy?
More recent studies have indicated that although combination therapy reduces proteinuria to a greater extent than monotherapy, overall it worsens major renal and cardiovascular outcomes. The multicenter Ongoing Telmisartan Alone and in Combination With Ramipril Global Endpoint Trial (ONTARGET)37 randomized more than 25,000 patients age 55 and older with established atherosclerotic vascular disease or with diabetes and end-organ damage to receive either the ARB telmisartan 80 mg daily, the ACE inhibitor ramipril 10 mg daily, or both. Mean follow-up was 56 months. The combination-treatment group had higher rates of death and renal disease than the single-therapy groups (which did not differ from one another).
Why the combination therapy had poorer outcomes is under debate. Patients may get sudden drops in blood pressure that are not detected with only periodic monitoring. Renal failure was mostly acute rather than chronic, and the estimated GFR declined more in the combined therapy group than in the single-therapy groups.
The Aliskiren Trial in Type 2 Diabetes Using Cardiovascular and Renal Disease Endpoints (ALTITUDE) was designed to test the effect of the direct renin inhibitor aliskiren or placebo, both arms combined with either an ACE inhibitor or an ARB in patients with type 2 diabetes at high risk for cardiovascular and renal events. The trial was terminated early because of more strokes and deaths in the combination therapy arms. The results led the FDA to issue black box warnings against using aliskiren with these other classes of agents, and all studies testing similar combinations have been stopped. (In one study that was stopped and has not yet been published, 100 patients with proteinuria were treated with either aliskiren, the ARB losartan, or both, to evaluate the effects of aldosterone escape. Results showed no differences: about one-third of each group had this phenomenon.)
My personal recommendation is as follows: for younger patients with proteinuria, at lower risk for cardiovascular events and with disease due not to diabetes but to immunoglobulin A nephropathy or another proteinuric kidney disease, treat with both an ACE inhibitor and ARB. But the combination should not be used for patients at high risk of cardiovascular disease, which includes almost all patients with diabetes.
If more aggressive renin-angiotensin system blockade is needed against diabetic nephropathy, adding a diuretic increases the impact of blocking the renin-angiotensin-aldosterone system on both proteinuria and progression of renal disease. The aldosterone blocker spironolactone 25 mg can be added if potassium levels are carefully monitored.
ACE inhibitor plus calcium channel blocker is safer than ACE inhibitor plus diuretic
The Avoiding Cardiovascular Events Through Combination Therapy in Patients Living With Systolic Hypertension (ACCOMPLISH) trial38 randomized more than 11,000 high-risk patients with hypertension to receive an ACE inhibitor (benazepril) plus either a calcium channel blocker (amlodipine) or thiazide diuretic (hydrochlorothiazide). Blood pressures were identical between the two groups, but the trial was terminated early, at 36 months, because of a higher risk of the combined end point of cardiovascular death, myocardial infarction, stroke, and other major cardiac events in the ACE inhibitor-thiazide group.
Although some experts believe this study is definitive and indicates that high blood pressure should never be treated with an ACE inhibitor-thiazide combination, I believe that caution is needed in interpreting these findings. This regimen should be avoided in older patients with diabetes at high risk for cardiovascular disease, but otherwise, getting blood pressure under control is critical, and this combination can be used if it works and the patient is tolerating it well.
In summary, the choice of blood pressure-lowering medications is based on reducing cardiovascular events and slowing the progression of kidney disease. Either an ACE inhibitor or an ARB is the first choice for patients with diabetes, hypertension, and any degree of proteinuria. Many experts recommend beginning one of these agents even if proteinuria is not present. However, the combination of an ACE inhibitor and ARB should not be used in diabetic patients, especially if they have cardiovascular disease, until further data clarify the results of the ONTARGET and ALTITUDE trials.
STRATEGY 4: METABOLIC MANIPULATION WITH NOVEL AGENTS
Several new agents have recently been studied for the treatment of diabetic nephropathy, including aminoguanidine, which reduces levels of advanced glycation end-products, and sulodexide, which blocks basement membrane permeability. Neither agent has been shown to be safe and effective in diabetic nephropathy. The newest agent is bardoxolone methyl. It induces the Keap1–Nrf2 pathway, which up-regulates cytoprotective factors, suppressing inflammatory and other cytokines that are major mediators of progression of chronic kidney disease.39
Pergola et al,40 in a phase 2, double-blind trial, randomized 227 adults with diabetic kidney disease and a low estimated GFR (20–45 mL/min/1.73 m2) to receive placebo or bardoxolone 25, 75, or 150 mg daily. Drug treatment was associated with improvement in the estimated GFR, a finding that persisted throughout the 52 weeks of treatment. Surprisingly, proteinuria did not decrease with drug treatment.
As of this writing, a large multicenter controlled randomized trial has been halted because of concerns by the data safety monitoring board, which found increased rates of death and fluid retention with the drug. A number of recent trials have shown a beneficial effect of sodium bicarbonate therapy in patients with late-stage chronic kidney disease. They have shown slowing of the progression of GFR decline in a number of renal diseases, including diabetes.
Diabetes is on the rise, and so is diabetic nephropathy. In view of this epidemic, physicians should consider strategies to detect and control kidney disease in their diabetic patients.
This article will focus on kidney disease in adult-onset type 2 diabetes. Although it has different pathogenetic mechanisms than type 1 diabetes, the clinical course of the two conditions is very similar in terms of the prevalence of proteinuria after diagnosis, the progression to renal failure after the onset of proteinuria, and treatment options.1
DIABETES AND DIABETIC KIDNEY DISEASE ARE ON THE RISE
The incidence of diabetes increases with age, and with the aging of the baby boomers, its prevalence is growing dramatically. The 2005– 2008 National Health and Nutrition Examination Survey estimated the prevalence as 3.7% in adults age 20 to 44, 13.7% at age 45 to 64, and 26.9% in people age 65 and older. The obesity epidemic is also contributing to the increase in diabetes in all age groups.
Diabetic kidney disease has increased in the United States from about 4 million cases 20 years ago to about 7 million in 2005–2008.2 Diabetes is the major cause of end-stage renal disease in the developed world, accounting for 40% to 50% of cases. Other major causes are hypertension (27%) and glomerulonephritis (13%).3
Physicians in nearly every field of medicine now care for patients with diabetic nephropathy. The classic presentation—a patient who has impaired vision, fluid retention with edema, and hypertension—is commonly seen in dialysis units and ophthalmology and cardiovascular clinics.
CLINICAL PROGRESSION
Early in the course of diabetic nephropathy, blood pressure is normal and microalbuminuria is not evident, but many patients have a high glomerular filtration rate (GFR), indicating temporarily “enhanced” renal function or hyperfiltration. The next stage is characterized by microalbuminuria, correlating with glomerular mesangial expansion: the GFR falls back into the normal range and blood pressure starts to increase. Finally, macroalbuminuria occurs, accompanied by rising blood pressure and a declining GFR, correlating with the histologic appearance of glomerulosclerosis and Kimmelstiel-Wilson nodules.4
Hypertension develops in 5% of patients by 10 years after type 1 diabetes is diagnosed, 33% by 20 years, and 70% by 40 years. In contrast, 40% of patients with type 2 diabetes have high blood pressure at diagnosis.
Unfortunately, in most cases, this progression is a one-way street, so it is critical to intervene to try to slow the progression early in the course of the disease process.
SCREENING FOR DIABETIC NEPHROPATHY
Nephropathy screening guidelines for patients with diabetes are provided in Table 1.5
Blood pressure should be monitored at each office visit (Table 1). The goal for adults with diabetes should be to reduce blood pressure to 130/80 mm Hg. Reduction beyond this level may be associated with an increased mortality rate.6 Very high blood pressure (> 180 mm Hg systolic) should be lowered slowly. Lowering blood pressure delays the progression from microalbuminuria (30–299 mg/day or 20–199 μg/min) to macroalbuminuria (> 300 mg/day or > 200 μg/min) and slows the progression to renal failure.
Urinary albumin. Proteinuria takes 5 to 10 years to develop after the onset of diabetes. Because it is possible for patients with type 2 diabetes to have had the disease for some time before being diagnosed, urinary albumin screening should be performed at diagnosis and annually thereafter. Patients with type 1 are usually diagnosed with diabetes at or near onset of disease; therefore, annual screening for urinary albumin can begin 5 years after diagnosis.5
Proteinuria can be measured in different ways (Table 2). The basic screening test for clinical proteinuria is the urine dipstick, which is very sensitive to albumin and relatively insensitive to other proteins. “Trace-positive” results are common in healthy people, so proteinuria is not confirmed unless a patient has repeatedly positive results.
Microalbuminuria is important to measure, especially if it helps determine therapy. It is not detectable by the urinary dipstick, but can be measured in the following ways:
- Measurement of the albumin-creatinine ratio in a random spot collection
- 24-hour collection (creatinine should simultaneously be measured and creatinine clearance calculated)
- Timed collection (4 hours or overnight).
The first method is preferred, and any positive test result must be confirmed by repeat analyses of urinary albumin before a patient is diagnosed with microalbuminuria.
Occasionally a patient presenting with proteinuria but normal blood sugar and hemoglobin A1c will have a biopsy that reveals morphologic changes of classic diabetic nephropathy. Most such patients have a history of hyperglycemia, indicating that they actually have been diabetic.
Proteinuria—the best marker of disease progression
Proteinuria is the strongest predictor of renal outcomes. The Reduction in End Points in Noninsulin-Dependent Diabetes Mellitus With the Angiotensin II Antagonist Losartan (RENAAL) study was a randomized, placebo-controlled trial in more than 1,500 patients with type 2 diabetes to test the effects of losartan on renal outcome. Those with high albuminuria (> 3.0 g albumin/g creatinine) at baseline were five times more likely to reach a renal end point and were eight times more likely to have progression to end-stage renal disease than patients with low albuminuria (< 1.5 g/g).7 The degree of albuminuria after 6 months of treatment showed similar predictive trends, indicating that monitoring and treating proteinuria are extremely important goals.
STRATEGY 1 TO LIMIT RENAL INJURY: REDUCE BLOOD PRESSURE
Blood pressure control improves renal and cardiovascular function.
As early as 1983, Parving et al,8 in a study of only 10 insulin-dependent diabetic patients, showed strong evidence that early aggressive antihypertensive treatment improved the course of diabetic nephropathy. During the mean pretreatment period of 29 months, the GFR decreased significantly and the urinary albumin excretion rate and arterial blood pressure rose significantly. During the mean 39-month period of antihypertensive treatment with metoprolol, hydralazine, and furosemide or a thiazide, mean arterial blood pressure fell from 144/97 to 128/84 mm Hg and urinary albumin excretion from 977 to 433 μg/ min. The rate of decline in GFR slowed from 0.91 mL/min/month before treatment to 0.39 mL/min/month during treatment.
The Action in Diabetes and Vascular Disease: Preterax and Diamicron MR Controlled Evaluation (ADVANCE) trial9 enrolled more than 11,000 patients internationally with type 2 diabetes at high risk for cardiovascular events. In addition to standard therapy, blood pressure was intensively controlled in one group with a combination of the angiotensin-converting enzyme (ACE) inhibitor perindopril and the diuretic indapamide. The intensive-therapy group achieved blood pressures less than 140/80 mm Hg and had a mean reduction of systolic blood pressure of 5.6 mm Hg and diastolic blood pressure of 2.2 mm Hg vs controls. Despite these apparently modest reductions, the intensively controlled group had a significant 9% reduction of the primary outcome of combined macrovascular events (cardiovascular death, myocardial infarction, and stroke) and microvascular events (new or worsening nephropathy, or retinopathy).10
A meta-analysis of studies of patients with type 2 diabetes found reduced nephropathy with systolic blood pressure control to less than 130 mm Hg.11
The United Kingdom Prospective Diabetes Study (UKPDS) is a series of studies of diabetes. The original study in 1998 enrolled 5,102 patients with newly diagnosed type 2 diabetes.12 The more than 1,000 patients with hypertension were randomized to either tight blood pressure control or regular care. The intensive treatment group had a mean blood pressure reduction of 9 mm Hg systolic and 3 mm Hg diastolic, along with major reductions in all diabetes end points, diabetes deaths, microvascular disease, and stroke over a median follow-up of 8.4 years.
Continuous blood pressure control is critical
Tight blood pressure control must be maintained to have continued benefit. During the 10 years following the UKPDS, no attempts were made to maintain the previously assigned therapies. A follow-up study13 of 884 UKPDS patients found that blood pressures were the same again between the two groups 2 years after the trial was stopped, and no beneficial legacy effect from previous blood pressure control was evident on end points.
Control below 120 mm Hg systolic not needed
Blood pressure control slows kidney disease and prevents major macrovascular disease, but there is no evidence that lowering systolic blood pressure below 120 mm Hg provides additional benefit. In the Action to Control Cardiovascular Risk in Diabetes (ACCORD) trial,14 more than 10,000 patients with type 2 diabetes and existing cardiovascular disease or additional cardiovascular risk factors were randomized to a goal of systolic blood pressure less than 120 mm Hg or less than 140 mm Hg (actual mean systolic pressures were 119 vs 134 mm Hg, respectively). Over nearly 5 years, there was no difference in cardiovascular events or deaths between the two groups.15
Since 1997, six international organizations have revised their recommended blood pressure goals in diabetes mellitus and renal diseases. Randomized clinical trials and observational studies have demonstrated the importance of blood pressure control to the level of 125/75 to 140/80 mm Hg. The National Kidney Foundation, the American Diabetes Association, and the Canadian Hypertension Society have developed consensus guidelines for blood pressure control to less than 130/80 mm Hg.16–21 Table 3 summarizes blood pressure goals for patients with diabetes.
STRATEGY 2: CONTROL BLOOD SUGAR
Recommendations for blood sugar goals are more controversial.
The Diabetes Control and Complications Trial22 provided early evidence that tight blood sugar control slows the development of microalbuminuria and macroalbuminuria. The study randomized more than 1,400 patients with type 1 diabetes to either standard therapy (1 or 2 daily insulin injections) or intensive therapy (an external insulin pump or 3 or more insulin injections guided by frequent blood glucose monitoring) to keep blood glucose levels close to normal. About half the patients had mild retinopathy at baseline and the others had no retinopathy. After 6.5 years, intensive therapy was found to significantly delay the onset and slow the progression of diabetic retinopathy and nephropathy.
The Kumamoto Study23 randomized 110 patients with type 2 diabetes and either no retinopathy (primary prevention cohort) or simple retinopathy (secondary prevention cohort) to receive either multiple insulin injections or conventional insulin therapy over 8 years. Intensive therapy led to lower rates of retinopathy (7.7% vs 32% in primary prevention and 19% vs 44% in secondary prevention) and progressive nephropathy (7% vs 28% in primary prevention at 6 years and 11% vs 32% in secondary prevention).
In addition to studying the effects of blood pressure control, the UKPDS also studied the effects of intensive blood glucose control.24,25 Nearly 4,000 patients with newly diagnosed type 2 diabetes were randomized to intensive treatment with a sulfonylurea or insulin, or to conventional treatment with diet. Over 10 years, the mean hemoglobin A1c was reduced to 7.0% in the intensive group and 7.9% in the conventional group. The risk of any diabetes-related end point was 12% lower in the intensive group, 10% lower for diabetes-related death, and 6% lower for all-cause mortality. There was also a 25% reduction in microvascular disease (retinopathy and nephropathy). However, the intensive group had more hypoglycemic episodes than the conventional group and a tendency to some increase in macrovascular events. A legacy effect was evident: patients who had intensive treatment had less microvascular disease progression years after stopping therapy.
Tight glycemic control reduces nephropathy, but does it increase cardiovascular risk?
Earlier trials provided strong evidence that blood glucose control prevents or slows retinopathy and nephropathy. The critical question is, “At what expense?” Although diabetes is the most common cause of kidney failure in the United States, most people with diabetes do not die of kidney failure, but of cardiovascular disease. Two recent large trials had different results regarding glycemic control below hemoglobin A1c of 7.0% and macrovascular risk, creating a controversy about what recommendations are best.
The ADVANCE trial, enrolling 11,140 patients with type 2 diabetes, was largely conducted in Australia and used the sulfonylurea glipizide for glycemic control. Compared with the group that received standard therapy (n=5,569), the intensive-treatment group (n=5,571) achieved mean hemoglobin A1c levels of 6.5% compared with 7.3% in the standard group, and had less nephropathy, less microalbuminuria, less doubling of creatinine, and a lower rate of end-stage renal disease (4% vs 5% in the standard therapy group). No difference between the two groups was found in retinopathy. Rates of all-cause mortality did not differ between the groups.9
The ACCORD trial had more than 10,000 subjects with type 2 diabetes and took place mostly in the United States. Using mainly rosiglitazone for intensive therapy, the intensive group achieved hemoglobin A1c levels of 6.4% vs 7.5% in the standard-therapy group. The trial was stopped early, at 3.7 years, because of a higher risk of death and cardiovascular events in the group with intensive glycemic control. However, the intensive-therapy group did have a significant decrease in microvascular renal outcomes and a reduction in the progression of retinopathy.14,26
In summary, tighter glycemic control improves microvascular complications—both retinopathy and nephropathy—in patients with type 2 diabetes. The benefit of intensive therapy on macrovascular complications (stroke, myocardial infarction) in long-standing diabetes has not been convincingly demonstrated in randomized trials. The UKPDS suggested that maintaining a hemoglobin A1c of 7% in patients newly diagnosed with type 2 diabetes confers long-term cardiovascular benefits. The target hemoglobin A1c for type 2 diabetes should be tailored to the patient: 7% is a reasonable goal for most patients, but the goal should be higher for the elderly and frail. Reducing the risk of cardiovascular death is still best done by controlling blood pressure, reducing lipids, quitting smoking, and losing weight.
STRATEGY 3: INHIBIT THE RENIN-ANGIOTENSIN-ALDOSTERONE AXIS
Components of the renin-angiotensin-aldosterone system are present not only in the circulation but also in many tissues, including the heart, brain, kidney, blood vessels, and adrenal glands. The role of renin-angiotensin-aldosterone system blockers in treating and preventing diabetic nephropathy has become controversial in recent years in light of findings from new studies.
The renin-angiotensin-aldosterone system is important in the development or maintenance of high blood pressure and the resultant damage to the brain, heart, and kidney. Drug development has focused on inhibiting steps in the biochemical pathway. ACE inhibitors block the formation of angiotensin II—the most biologically potent angiotensin peptide—and are among the most commonly used drugs to treat hypertension and concomitant conditions, such as renal insufficiency, proteinuria, and heart failure. Angiotensin receptor blockers (ARBs) interact with the angiotensin AT1 receptor and block most of its actions. They are approved by the US Food and Drug Administration (FDA) for the treatment of hypertension, and they help prevent left ventricular hypertrophy and mesangial sclerosis. Large studies have shown that ACE inhibitors and ARBs offer similar cardiovascular benefit.
The glomerulus has the only capillary bed with a blood supply that drains into an efferent arteriole instead of a venule, providing high resistance to aid filtration. Efferent arterioles are rich in AT1 receptors. In the presence of angiotensin II they constrict, increasing pressure in the glomerulus, which can lead to proteinuria and glomerulosclerosis. ACE inhibitors and ARBs relax the efferent arteriole, allowing increased blood flow through the glomerulus. This reduction in intraglomerular pressure is associated with less proteinuria and less glomerulosclerosis.
Diabetes promotes renal disease in many ways. Glucose and advanced glycation end products can lead to increased blood flow and increased pressure in the glomerulus. Through a variety of pathways, hyperglycemia, acting through angiotensin II, leads to production of NF-kappa B and profibrotic cytokines, increased matrix deposition, and eventual fibrosis. ACE inhibitors and ARBs counteract many of these effects.
ACE inhibitors and ARBs slow nephropathy progression beyond blood pressure control
Several major clinical trials27–32 examined the effects of either ACE inhibitors or ARBs in slowing the progression of diabetic nephropathy and have had consistently positive results.
The Collaborative Study Group30 was a 3-year randomized trial of the ACE inhibitor captopril vs placebo in 419 patients with type 1 diabetes. Captopril was associated with less decline in kidney function and a 50% reduction in the risk of the combined end point of death, dialysis, and transplantation, independent of the small difference in blood pressure between the two groups.
The Irbesartan Diabetic Nephropathy Trial (IDNT)31 studied the effect of the ARB irbesartan vs the calcium channel blocker amlodipine vs placebo over 2.6 years in 1,715 patients with type 2 diabetes. Irbesartan was found to be significantly more effective in protecting against the progression of nephropathy, independent of reduction in blood pressure.
The RENAAL trial,32 published in 2001, was a 3-year, randomized, double-blind study comparing the ARB losartan at increasing dosages with placebo (both taken in addition to conventional antihypertensive treatment) in 1,513 patients with type 2 diabetes and nephropathy. The blood pressure goal was 140/90 mm Hg in both groups, but the losartan group had a lower rate of doubling of serum creatinine, end-stage renal disease, and combined end-stage renal disease or death.
‘Aldosterone escape’ motivates the search for new therapies
An important reason for developing more ways to block the renin-angiotensin-aldosterone system is “aldosterone escape,” the phenomenon in which angiotensin II or aldosterone returns to pretreatment levels despite continued ACE inhibition.
Biollaz et al,33 in a 1982 study of 19 patients with hypertension, showed that despite reducing blood pressure and keeping the blood level of ACE very low with twice-daily enalapril 20 mg, blood and urine levels of angiotensin II steadily rose back to baseline levels within a few months.
A growing body of evidence suggests that despite effective inhibition of angiotensin II activity, non-ACE synthetic pathways still permit angiotensin II generation via serine proteases such as chymase, cathepsin G, and tissue plasminogen activator.
Thus, efforts have been made to block the renin-angiotensin system in other places. In addition to ACE inhibitors and ARBs, two aldosterone receptor antagonists are available, spironolactone and eplerenone, both used to treat heart failure. A direct renin inhibitor, aliskiren, is also available.
Combination therapy—less proteinuria, but…
A number of studies have shown that combination treatment with agents having different targets in the renin-angiotensin-aldosterone system leads to larger reductions in albuminuria than does single-agent therapy.
Mogensen et al34 studied the effect of the ACE inhibitor lisinopril (20 mg per day) plus the ARB candesartan (16 mg per day) in subjects with microalbuminuria, hypertension, and type 2 diabetes. Combined treatment was more effective in reducing proteinuria.
Epstein et al35 studied the effects of the ACE inhibitor enalapril (20 mg/day) combined with either of two doses of the selective aldosterone receptor antagonist eplerenone (50 or 100 mg/day) or placebo. Both eplerenone dosages, when added to the enalapril treatment, significantly reduced albuminuria from baseline as early as week 4 (P < .001), but placebo treatment added to the enalapril did not result in any significant decrease in urinary albumin excretion. Systolic blood pressure decreased significantly in all treatment groups and by about the same amount.
The Aliskiren Combined With Losartan in Type 2 Diabetes and Nephropathy (AVOID) trial36 randomized more than 600 patients with type 2 diabetes and nephropathy to aliskiren (a renin inhibitor) or placebo added to the ARB losartan. Again, combination treatment was more renoprotective, independent of blood pressure lowering.
Worse outcomes with combination therapy?
More recent studies have indicated that although combination therapy reduces proteinuria to a greater extent than monotherapy, overall it worsens major renal and cardiovascular outcomes. The multicenter Ongoing Telmisartan Alone and in Combination With Ramipril Global Endpoint Trial (ONTARGET)37 randomized more than 25,000 patients age 55 and older with established atherosclerotic vascular disease or with diabetes and end-organ damage to receive either the ARB telmisartan 80 mg daily, the ACE inhibitor ramipril 10 mg daily, or both. Mean follow-up was 56 months. The combination-treatment group had higher rates of death and renal disease than the single-therapy groups (which did not differ from one another).
Why combination therapy had poorer outcomes is under debate. Patients may experience sudden drops in blood pressure that are not detected with only periodic monitoring. Renal failure was mostly acute rather than chronic, and the estimated GFR declined more in the combination-therapy group than in the single-therapy groups.
The Aliskiren Trial in Type 2 Diabetes Using Cardiovascular and Renal Disease Endpoints (ALTITUDE) was designed to test the effect of the direct renin inhibitor aliskiren vs placebo, each added to either an ACE inhibitor or an ARB, in patients with type 2 diabetes at high risk of cardiovascular and renal events. The trial was terminated early because of more strokes and deaths in the combination-therapy arm. The results led the FDA to issue black box warnings against using aliskiren with these other classes of agents, and all studies testing similar combinations have been stopped. (In one study that was stopped and has not yet been published, 100 patients with proteinuria were treated with aliskiren, the ARB losartan, or both to evaluate the effects of aldosterone escape. Results showed no differences: about one-third of each group had this phenomenon.)
My personal recommendation is as follows: for younger patients with proteinuria who are at lower risk of cardiovascular events and whose disease is due not to diabetes but to immunoglobulin A nephropathy or another proteinuric kidney disease, treat with both an ACE inhibitor and an ARB. The combination should not be used, however, in patients at high risk of cardiovascular disease, which includes almost all patients with diabetes.
If more aggressive renin-angiotensin system blockade is needed against diabetic nephropathy, adding a diuretic increases the impact of blocking the renin-angiotensin-aldosterone system on both proteinuria and progression of renal disease. The aldosterone blocker spironolactone 25 mg can be added if potassium levels are carefully monitored.
ACE inhibitor plus calcium channel blocker is safer than ACE inhibitor plus diuretic
The Avoiding Cardiovascular Events Through Combination Therapy in Patients Living With Systolic Hypertension (ACCOMPLISH) trial38 randomized more than 11,000 high-risk patients with hypertension to receive an ACE inhibitor (benazepril) plus either a calcium channel blocker (amlodipine) or thiazide diuretic (hydrochlorothiazide). Blood pressures were identical between the two groups, but the trial was terminated early, at 36 months, because of a higher risk of the combined end point of cardiovascular death, myocardial infarction, stroke, and other major cardiac events in the ACE inhibitor-thiazide group.
Although some experts believe this study is definitive and indicates that high blood pressure should never be treated with an ACE inhibitor-thiazide combination, I believe that caution is needed in interpreting these findings. This regimen should be avoided in older patients with diabetes at high risk for cardiovascular disease, but otherwise, getting blood pressure under control is critical, and this combination can be used if it works and the patient is tolerating it well.
In summary, the choice of blood pressure-lowering medications is based on reducing cardiovascular events and slowing the progression of kidney disease. Either an ACE inhibitor or an ARB is the first choice for patients with diabetes, hypertension, and any degree of proteinuria. Many experts recommend beginning one of these agents even if proteinuria is not present. However, the combination of an ACE inhibitor and ARB should not be used in diabetic patients, especially if they have cardiovascular disease, until further data clarify the results of the ONTARGET and ALTITUDE trials.
STRATEGY 4: METABOLIC MANIPULATION WITH NOVEL AGENTS
Several new agents have recently been studied for the treatment of diabetic nephropathy, including aminoguanidine, which reduces levels of advanced glycation end-products, and sulodexide, which blocks basement membrane permeability. Neither agent has been shown to be safe and effective in diabetic nephropathy. The newest agent is bardoxolone methyl. It induces the Keap1–Nrf2 pathway, which up-regulates cytoprotective factors, suppressing inflammatory and other cytokines that are major mediators of progression of chronic kidney disease.39
Pergola et al,40 in a phase 2, double-blind trial, randomized 227 adults with diabetic kidney disease and a low estimated GFR (20–45 mL/min/1.73 m2) to receive placebo or bardoxolone 25, 75, or 150 mg daily. Drug treatment was associated with improvement in the estimated GFR, a finding that persisted throughout the 52 weeks of treatment. Surprisingly, proteinuria did not decrease with drug treatment.
As of this writing, a large multicenter, randomized controlled trial of bardoxolone has been halted because the data safety monitoring board found increased rates of death and fluid retention with the drug. Separately, a number of recent trials of sodium bicarbonate therapy in patients with late-stage chronic kidney disease have shown slowing of the decline in GFR in a number of renal diseases, including diabetes.
- Ritz E, Orth SR. Nephropathy in patients with type 2 diabetes mellitus. N Engl J Med 1999; 341:1127–1133.
- de Boer IH, Rue TC, Hall YN, Heagerty PJ, Weiss NS, Himmelfarb J. Temporal trends in the prevalence of diabetic kidney disease in the United States. JAMA 2011; 305:2532–2539.
- United States Renal Data System (USRDS) 2000 Annual Data Report. National Institutes of Health, National Institute of Diabetes and Digestive and Kidney Diseases – Division of Kidney, Urologic and Hematologic Diseases. USRDS Coordinating Center operated by the Minneapolis Medical Research Foundation. www.usrds.org
- Macisaac RJ, Jerums G. Diabetic kidney disease with and without albuminuria. Curr Opin Nephrol Hypertens 2011; 20:246–257.
- Molitch ME, DeFronzo RA, Franz MJ, et al; American Diabetes Association. Nephropathy in diabetes. Diabetes Care 2004; 27(suppl 1):S79–S83.
- Vamos EP, Harris M, Millett C, et al. Association of systolic and diastolic blood pressure and all cause mortality in people with newly diagnosed type 2 diabetes: retrospective cohort study. BMJ 2012; 345:e5567.
- de Zeeuw D, Remuzzi G, Parving HH, et al. Proteinuria, a target for renoprotection in patients with type 2 diabetic nephropathy: lessons from RENAAL. Kidney Int 2004; 65:2309–2320.
- Parving HH, Andersen AR, Smidt UM, Svendsen PA. Early aggressive antihypertensive treatment reduces rate of decline in kidney function in diabetic nephropathy. Lancet 1983; 1:1175–1179.
- ADVANCE Collaborative Group; Patel A, MacMahon S, Chalmers J, et al. Intensive blood glucose control and vascular outcomes in patients with type 2 diabetes. N Engl J Med 2008; 358:2560–2572.
- ADVANCE Collaborative Group; Patel A, MacMahon S, Chalmers J, et al. Effects of a fixed combination of perindopril and indapamide on macrovascular and microvascular outcomes in patients with type 2 diabetes mellitus (the ADVANCE trial): a randomised controlled trial. Lancet 2007; 370:829–840.
- Bangalore S, Kumar S, Lobach I, Messerli FH. Blood pressure targets in subjects with type 2 diabetes mellitus/impaired fasting glucose: observations from traditional and bayesian random-effects meta-analyses of randomized trials. Circulation 2011; 123:2799–2810.
- UK Prospective Diabetes Study Group. Tight blood pressure control and risk of macrovascular and microvascular complications in type 2 diabetes: UKPDS 38. BMJ 1998; 317:703–713.
- Holman RR, Paul SK, Bethel MA, Matthews DR, Neil HA. 10-year follow-up of intensive glucose control in type 2 diabetes. N Engl J Med 2008; 359:1577–1589.
- Action to Control Cardiovascular Risk in Diabetes Study Group; Gerstein HC, Miller ME, Byington RP, et al. Effects of intensive glucose lowering in type 2 diabetes. N Engl J Med 2008; 358:2545–2559.
- ACCORD Study Group; Cushman WC, Evans GW, Byington RP, et al. Effects of intensive blood-pressure control in type 2 diabetes mellitus. N Engl J Med 2010; 362:1575–1585.
- American Diabetes Association. Standards of medical care in diabetes—2012. Diabetes Care 2012; 35(suppl 1):S11–S63. (Erratum in: Diabetes Care 2012; 35:660.)
- Bakris GL, Williams M, Dworkin L, et al. Preserving renal function in adults with hypertension and diabetes: a consensus approach. National Kidney Foundation Hypertension and Diabetes Executive Committees Working Group. Am J Kidney Dis 2000; 36:646–661.
- Ramsay L, Williams B, Johnston G, et al. Guidelines for management of hypertension: report of the third working party of the British Hypertension Society. J Hum Hypertens 1999; 13:569–592.
- Feldman RD, Campbell N, Larochelle P, et al. 1999 Canadian recommendations for the management of hypertension. Task Force for the Development of the 1999 Canadian Recommendations for the Management of Hypertension. CMAJ 1999; 161(suppl 12):S1–S17.
- Chalmers J, MacMahon S, Mancia G, et al. 1999 World Health Organization-International Society of Hypertension Guidelines for the management of hypertension. Guidelines Sub-committee of the World Health Organization. Clin Exp Hypertens 1999; 21:1009–1060.
- The seventh report of the Joint National Committee on Prevention, Detection Evaluation, and Treatment of High Blood Pressure. Hypertension 2003; 42:1206–1252.
- The effect of intensive treatment of diabetes on the development and progression of long-term complications in insulin-dependent diabetes mellitus. The Diabetes Control and Complications Trial Research Group. N Engl J Med 1993; 329:977–986.
- Shichiri M, Kishikawa H, Ohkubo Y, Wake N. Long-term results of the Kumamoto Study on optimal diabetes control in type 2 diabetic patients. Diabetes Care 2000; 23(suppl 2):B21–B29.
- Intensive blood-glucose control with sulphonylureas or insulin compared with conventional treatment and risk of complications in patients with type 2 diabetes (UKPDS 33). UK Prospective Diabetes Study (UKPDS) Group. Lancet 1998; 352:837–853. Erratum in: Lancet 1999; 354:602.
- Tight blood pressure control and risk of macrovascular and microvascular complications in type 2 diabetes: UKPDS 38. UK Prospective Diabetes Study Group. BMJ 1998; 317:703–713. Erratum in: BMJ 1999; 318:29.
- Ismail-Beigi F, Craven T, Banerji MA, et al; ACCORD trial group. Effect of intensive treatment of hyperglycaemia on microvascular outcomes in type 2 diabetes: an analysis of the ACCORD randomised trial. Lancet 2010; 376:419–430. Erratum in: Lancet 2010; 376:1466.
- Effects of ramipril on cardiovascular and microvascular outcomes in people with diabetes mellitus: results of the HOPE study and MICRO-HOPE substudy. Heart Outcomes Prevention Evaluation Study Investigators. Lancet 2000; 355:253–259. Erratum in: Lancet 2000; 356:860.
- Parving HH, Lehnert H, Bröchner-Mortensen J, et al; Irbesartan in Patients with Type 2 Diabetes and Microalbuminuria Study Group. The effect of irbesartan on the development of diabetic nephropathy in patients with type 2 diabetes. N Engl J Med 2001; 345:870–878.
- Viberti G, Wheeldon NM; MicroAlbuminuria Reduction With VALsartan (MARVAL) Study Investigators. Microalbuminuria reduction with valsartan in patients with type 2 diabetes mellitus: a blood pressure-independent effect. Circulation 2002; 106:672–678.
- Lewis EJ, Hunsicker LG, Bain RP, Rohde RD. The effect of angiotensin-converting-enzyme inhibition on diabetic nephropathy. The Collaborative Study Group. N Engl J Med 1993; 329:1456–1462.
- Lewis EJ, Hunsicker LG, Clarke WR, et al; Collaborative Study Group. Renoprotective effect of the angiotensin-receptor antagonist irbesartan in patients with nephropathy due to type 2 diabetes. N Engl J Med 2001; 345:851–860.
- Brenner BM, Cooper ME, de Zeeuw D, et al; RENAAL Study Investigators. Effects of losartan on renal and cardiovascular outcomes in patients with type 2 diabetes and nephropathy. N Engl J Med 2001; 345:861–869.
- Biollaz J, Brunner HR, Gavras I, Waeber B, Gavras H. Antihypertensive therapy with MK 421: angiotensin II--renin relationships to evaluate efficacy of converting enzyme blockade. J Cardiovasc Pharmacol 1982; 4:966–972.
- Mogensen CE, Neldam S, Tikkanen I, et al. Randomised controlled trial of dual blockade of renin-angiotensin system in patients with hypertension, microalbuminuria, and non-insulin dependent diabetes: the candesartan and lisinopril microalbuminuria (CALM) study. BMJ 2000; 321:1440–1444.
- Epstein M, Williams GH, Weinberger M, et al. Selective aldosterone blockade with eplerenone reduces albuminuria in patients with type 2 diabetes. Clin J Am Soc Nephrol 2006; 1:940–951.
- Parving HH, Persson F, Lewis JB, Lewis EJ, Hollenberg NK; AVOID Study Investigators. Aliskiren combined with losartan in type 2 diabetes and nephropathy. N Engl J Med 2008; 358:2433–2446.
- Mann JF, Schmieder RE, McQueen M, et al; ONTARGET investigators. Renal outcomes with telmisartan, ramipril, or both, in people at high vascular risk (the ONTARGET study): a multicentre, randomised, double-blind, controlled trial. Lancet 2008; 372:547–553.
- Jamerson K, Weber MA, Bakris GL, et al; ACCOMPLISH Trial Investigators. Benazepril plus amlodipine or hydrochlorothiazide for hypertension in high-risk patients. N Engl J Med 2008; 359:2417–2428.
- Kim HJ, Vaziri ND. Contribution of impaired Nrf2-Keap1 pathway to oxidative stress and inflammation in chronic renal failure. Am J Physiol Renal Physiol 2010; 298:F662–F671.
- Pergola PE, Raskin P, Toto RD, et al; BEAM Study Investigators. Bardoxolone methyl and kidney function in CKD with type 2 diabetes. N Engl J Med 2011; 365:327–336.
KEY POINTS
- The progression from no proteinuria to microalbuminuria to clinical proteinuria parallels glomerular changes of thickening of the basement membrane, mesangial expansion, and the development of Kimmelstiel-Wilson nodules and sclerosis.
- Blood pressure control to 130/80 mm Hg slows microvascular and macrovascular disease, but the goal should not be lower in older patients with diabetes.
- Glycemic control slows microvascular disease: the hemoglobin A1c goal for most patients is 7.0%. Tighter control may increase cardiovascular risk.
- Either an angiotensin-converting enzyme inhibitor or an angiotensin receptor blocker is the first-line treatment for diabetic nephropathy; combining the two is no longer recommended.
- If more aggressive treatment is needed, a diuretic or spironolactone (with potassium monitoring) can be added.
- The role of sodium bicarbonate and new agents such as blockers of transcription factors is still emerging.
Carbapenem-resistant Enterobacteriaceae: A menace to our most vulnerable patients
The past 10 years have brought a formidable challenge to the clinical arena, as carbapenems, until now the most reliable antibiotics against Klebsiella species, Escherichia coli, and other Enterobacteriaceae, are becoming increasingly ineffective.
Infections caused by carbapenem-resistant Enterobacteriaceae (CRE) pose a serious threat to hospitalized patients. Moreover, CRE often demonstrate resistance to many other classes of antibiotics, thus limiting our therapeutic options. Furthermore, few new antibiotics are in line to replace carbapenems. This public health crisis demands redefined and refocused efforts in the diagnosis, treatment, and control of infections in hospitalized patients.
Here, we present an overview of CRE and discuss avenues to escape a new era of untreatable infections.
INCREASED USE OF CARBAPENEMS AND EMERGENCE OF RESISTANCE
Developed in the 1980s, carbapenems are derivatives of thienamycin. Imipenem and meropenem, the first members of the class, had a broad spectrum of antimicrobial activity that included coverage of Pseudomonas aeruginosa, positioning them well for the treatment of nosocomial infections. At that time, nearly all Enterobacteriaceae were susceptible to carbapenems.1
In the 1990s, Enterobacteriaceae started to develop resistance to cephalosporins—until then the first-line antibiotics for these organisms—by acquiring extended-spectrum beta-lactamases, which inactivate those agents. Consequently, the use of cephalosporins had to be restricted, and reliance on carbapenems, which remained impervious to these enzymes, increased.2 In pivotal international studies of the treatment of infections caused by strains of K pneumoniae producing these inactivating enzymes, outcomes were better with carbapenems than with cephalosporins and fluoroquinolones.3,4
Ertapenem, a highly protein-bound carbapenem without antipseudomonal activity, was released in 2001. Its prolonged half-life permits once-daily dosing, which positioned it as an option for treating infections in community dwellers.5 Doripenem, the newest member of the carbapenem class, has a spectrum of activity similar to that of imipenem and meropenem, including P aeruginosa.6 The use of carbapenems, measured in a representative sample of 35 university hospitals in the United States, increased by 59% between 2002 and 2006.7
In the early 2000s, carbapenem resistance in K pneumoniae and other Enterobacteriaceae was rare in North America. But then, after initial outbreaks occurred in hospitals in the Northeast (especially New York City), CRE began to spread throughout the United States. By 2009–2010, the National Healthcare Safety Network from the Centers for Disease Control and Prevention (CDC) revealed that 12.8% of K pneumoniae isolates associated with bloodstream infections were resistant to carbapenems.8
In March 2013, the CDC disclosed that 3.9% of short-stay acute-care hospitals and 17.8% of long-term acute-care hospitals reported at least one CRE health care-associated infection in 2012. CRE had extended to 42 states, and the proportion of Enterobacteriaceae that are CRE had increased fourfold over the past 10 years.9
Coinciding with the increased use of carbapenems, multiple factors and modifiers likely contributed to the dramatic increase in CRE. These include use of other antibiotics in humans and animals, their relative penetration and selective effect on the gut microbiota, case-mix and infection control practices in different health care settings, and travel patterns.
POWERFUL ENZYMES THAT TRAVEL FAR
Bacterial acquisition of carbapenemases, enzymes that inactivate carbapenems, is crucial to the emergence of CRE. The enzyme in the sentinel carbapenem-resistant K pneumoniae isolate found in 1996 in North Carolina was designated K pneumoniae carbapenemase (KPC-1). This mechanism also conferred resistance to all cephalosporins, aztreonam, and beta-lactamase inhibitors such as clavulanic acid and tazobactam.10
KPC-2 (later determined to be identical to KPC-1) was found in K pneumoniae from Baltimore, and KPC-3 caused an early outbreak in New York City.11,12 To date, 12 additional variants of blaKPC, the gene encoding for the KPC enzyme, have been described.13
The genes encoding carbapenemases are usually found on plasmids or other common mobile genetic elements.14 These genetic elements allow the organism to acquire genes conferring resistance to other classes of antimicrobials, such as aminoglycoside-modifying enzymes, fluoroquinolone-resistance determinants, and additional beta-lactamases.15,16 The result is that CRE isolates are increasingly multidrug-resistant (ie, resistant to three or more classes of antimicrobials), extensively drug-resistant (ie, resistant to all but one or two classes), or pandrug-resistant (ie, resistant to all available classes of antibiotics).17 Thus, up to 98% of KPC-producing K pneumoniae are resistant to trimethoprim-sulfamethoxazole, 90% are resistant to fluoroquinolones, and 60% are resistant to gentamicin or amikacin.15
The mobility of these genetic elements has also allowed for dispersion into diverse Enterobacteriaceae such as E coli, Klebsiella oxytoca, Enterobacter, Serratia, and Salmonella species. Furthermore, KPC has been described in non-Enterobacteriaceae such as Acinetobacter baumannii and P aeruginosa.
Extending globally, KPC is now endemic in the Mediterranean basin, including Israel, Greece, and Italy; in South America, especially Colombia, Argentina, and Brazil; and in China.18 Most interesting is the intercontinental transfer of these strains: it has been documented that the index patient with KPC-producing K pneumoniae in Medellin, Colombia, came from Israel to undergo liver transplantation.19 Likewise, KPC-producing K pneumoniae in France and Israel could be linked epidemiologically and genetically to the predominant US strain.20,21
Even more explosive has been the surge of another carbapenemase, the Ambler Class B New Delhi metallo-beta-lactamase, or NDM-1. Initially reported in a urinary isolate of K pneumoniae from a Swedish patient who had been hospitalized in New Delhi in 2008, NDM-1 was soon found throughout India, in Pakistan, and in the United Kingdom.22 Interestingly, several of the UK patients with NDM-1-harboring bacteria had received organ transplants in the Indian subcontinent. Reports from elsewhere in Europe, Australia, and Africa followed suit, usually with a connection to the Indian subcontinent epicenter. In contrast, several other cases in Europe were traced to the Balkans, where there appears to be another focus of NDM-1.23
Penetration of NDM-1 into North America has begun, with cases and outbreaks reported in several US and Canadian regions, and in a military medical facility in Afghanistan. In several of these instances, there has been a documented link with travel and hospitalizations overseas.24–27 However, no such link with travel could be established in a recent outbreak in Ontario.27
In addition, resistance to carbapenems may result from other enzymes (Table 1), or from combinations of changes in outer membrane porins and the production of extended spectrum beta-lactamases or other cephalosporinases.28
DEADLY IMPACT ON THE MOST VULNERABLE
Regardless of the resistance pattern, Enterobacteriaceae are an important cause of health care-associated infections, including urinary and bloodstream infections in patients with indwelling catheters, pneumonia (often in association with mechanical ventilation), and, less frequently, infections of skin and soft tissues and the central nervous system.29–31
Several studies have examined the clinical characteristics and outcomes of patients with CRE infections. Those typically affected are elderly and debilitated and have multiple comorbidities, including diabetes mellitus and immunosuppression. They are heavily exposed to health care with frequent antecedent hospitalizations and invasive procedures. Furthermore, they are often severely ill and require intensive care. Patients infected with carbapenem-resistant K pneumoniae, compared with those with carbapenem-susceptible strains, are more likely to have undergone organ or stem cell transplantation or mechanical ventilation, and to have had a longer hospital stay before infection.
They also experience a high mortality rate, which ranges from 30% in patients with nonbacteremic infections to 72% in series of patients with liver transplants or bloodstream infections.32–37
More recently, CRE has been reported in other vulnerable populations, such as children with critical illness or cancer and in burn patients.38–40
Elderly and critically ill patients with bacteremia originating from a high-risk source (eg, pneumonia) typically face the most adverse outcomes. With increasing drug resistance, inadequate initial antimicrobial therapy is more commonly seen and may account for some of these poor outcomes.37,41
LONG-TERM CARE FACILITIES IN THE EYE OF THE STORM
A growing body of evidence suggests that long-term care facilities play a crucial role in the spread of CRE.
In an investigation of carbapenem-resistant A baumannii and K pneumoniae in a hospital system,36 75% of patients with carbapenem-resistant K pneumoniae were admitted from long-term care facilities, and only 1 of 13 patients was discharged home.
In a series of patients with carbapenem-resistant K pneumoniae bloodstream infections, 42% survived their index hospital stay. Of these patients, only 32% were discharged home, and readmissions were very common.32
Admission from a long-term care facility or transfer from another hospital is significantly associated with carbapenem resistance in patients with Enterobacteriaceae.42 Similarly, in Israel, a large reservoir of CRE was found in postacute care facilities.43
It is clear that long-term care residents are at increased risk of colonization and infection with CRE. However, further studies are needed to evaluate whether this simply reflects an overlap in risk factors or whether significant patient-to-patient transmission occurs in these settings.
INFECTION CONTROL TAKES CENTER STAGE
It is important to note that risk factors for CRE match those of various nosocomial infections, including other resistant gram-negative bacilli, methicillin-resistant Staphylococcus aureus, vancomycin-resistant enterococci, Candida species, and Clostridium difficile; in fact, CRE often coexist with other multidrug-resistant organisms.44,45
Common risk factors include residence in a long-term care facility, an intensive care unit stay, use of lines and catheters, and antibiotic exposure. This commonality of risk factors implies that systematic infection-prevention measures will have an impact on the prevalence and incidence rates of multidrug-resistant organism infections across the board, CRE included. It should be emphasized that strict compliance with hand hygiene is still the foundation of any infection-prevention strategy.
Infection prevention and the control of transmission of CRE in long-term care facilities pose unique challenges. Guidelines from the Society for Healthcare Epidemiology and the Association for Professionals in Infection Control recommend the use of contact precautions for patients with multidrug-resistant organisms, including CRE, who are ill and totally dependent on health care workers for activities of daily living or whose secretions or drainage cannot be contained. These same guidelines advise against attempting to eradicate multidrug-resistant organism colonization status.46
In acute care facilities, Best Infection Control Practices from the CDC and the Healthcare Infection Control Practices Advisory Committee encourage mechanisms for the rapid recognition and reporting of CRE cases to infection prevention personnel so that contact precautions can be implemented. Furthermore, facilities without CRE cases should carry out periodic laboratory reviews to identify cases, and patients exposed to CRE cases should be screened with surveillance cultures.47
Outbreaks of CRE may require extraordinary infection control measures. An approach that combines point-prevalence surveillance of colonization and detection of environmental and common-equipment contamination with a bundle consisting of chlorhexidine baths, cohorting of colonized patients and health care personnel, increased environmental cleaning, and staff education may be effective in controlling outbreaks of CRE.48
Nevertheless, control of CRE may prove exceptionally difficult. A recent high-profile outbreak of carbapenem-resistant K pneumoniae at the National Institutes of Health Clinical Center in Maryland caused infections in 18 patients, 11 of whom died.49 Of note, carbapenem-resistant K pneumoniae was detected in both respiratory equipment and sink drains during this outbreak. The outbreak was ultimately contained through detection by surveillance cultures and strict cohorting of colonized patients, which minimized the sharing of medical equipment and personnel between affected patients and other patients in the hospital. Additionally, rooms were sanitized with hydrogen peroxide vapor, and sinks and drains where carbapenem-resistant K pneumoniae was detected were removed.
CHALLENGES IN THE MICROBIOLOGY LABORATORY
Adequate treatment and control of CRE infections is predicated upon their accurate and prompt diagnosis from patient samples in the clinical microbiology laboratory.50
Traditional and current culture-based methods take several days to provide that information, delaying effective antibiotic therapy and permitting the transmission of undetected CRE. Furthermore, interpretative criteria of minimal inhibitory concentrations (MICs) of carbapenems recently required readjustment, as many KPC-producing strains of K pneumoniae had MICs below the previous breakpoint of resistance. In the past, this contributed to instances of “silent” dissemination of KPC-producing K pneumoniae.51
On the other hand, using the new, lower breakpoints of resistance for carbapenems without a phenotypic test, such as the modified Hodge test or the carbapenem-EDTA combination tests, does not differentiate among the various mechanisms of carbapenem resistance.28,52,53 This may be clinically relevant, as the clinical response to carbapenem therapy may vary depending on the mechanism of resistance.
GENERAL PRINCIPLES APPLY
In treating patients infected with CRE, clinicians need to strictly observe general principles of infectious disease management to ensure the best possible outcomes. These include:
Timely and accurate diagnosis, as discussed above.
Source control, which should include drainage of any infected collections, and removal of lines, devices, and urinary catheters.
Distinguishing between infection and colonization. CRE are often encountered as urinary isolates, and the distinction between asymptomatic bacteriuria and urinary tract infection may be extremely difficult, especially in residents of long-term care facilities with chronic indwelling catheters, who are the group at highest risk of CRE colonization and infection. Urinalysis may be helpful when it shows no pyuria, as the absence of pyuria essentially rules out infection; the presence of pyuria, however, is not helpful, as pyuria is common in both asymptomatic bacteriuria and urinary tract infection.54 Symptoms should be carefully evaluated in every patient with bacteriuria, and urinary tract infection should be a diagnosis of exclusion in patients with functional symptoms such as confusion or falls.
Selection of the most appropriate antibiotic regimen. While the emphasis is often on the antibiotic regimen, the above elements should not be neglected.
A DWINDLING THERAPEUTIC ARSENAL
Clinicians treating CRE infections are left with only a few antibiotic options. These options are generally limited by a lack of clinical data on efficacy, as well as by concerns about toxicity. These “drugs of last resort” include polymyxins (such as colistin), aminoglycosides, tigecycline, and fosfomycin. The role of carbapenem therapy, potentially in combination regimens, in a high-dose prolonged infusion, or even “double carbapenem therapy” remains to be determined.37,55,56
Colistin
Colistin is one of the first-line agents for treating CRE infections. First introduced in the 1950s, colistin was largely abandoned in favor of aminoglycosides, and a proportion of the data on its safety and efficacy is therefore based on older, less rigorous studies.
Neurotoxicity and nephrotoxicity are the two main concerns with colistin, and while the incidence of these adverse events does appear to be lower with modern preparations, it is still substantial.57 Dosing issues have not been completely clarified either, especially in relation to renal clearance and in patients on renal replacement therapy.58,59 Unfortunately, there have been reports of outbreaks of CRE displaying resistance to colistin.60
Tigecycline
Tigecycline is a newer antibiotic of the glycylcycline class. Like colistin, it has no oral preparation for systemic infections.
The main side effect of tigecycline is nausea.61 Other reported issues include pancreatitis and extreme alkaline phosphatase elevations.
The efficacy of tigecycline has come into question in view of meta-analyses of clinical trials, some of which have shown higher mortality rates in patients treated with tigecycline than in those treated with comparator agents.62–65 Based on these data, the FDA issued a warning in 2010 regarding the increased mortality risk. Although these meta-analyses did not include patients with CRE, for whom the available comparators would have been ineffective, this is an important safety signal.
The efficacy of tigecycline is further limited by increasing in vitro resistance in CRE. Serum and urinary levels of tigecycline are low, and most experts discourage its use as monotherapy for bloodstream or urinary tract infections.
Aminoglycosides
CRE display variable in vitro susceptibility to different aminoglycosides. If the organism is susceptible, aminoglycosides may be very useful in the treatment of CRE infections, especially urinary tract infections. In a study of carbapenem-resistant K pneumoniae urinary tract infections, patients treated with polymyxins or tigecycline were significantly less likely to have clearance of their urine than patients treated with aminoglycosides.66
Ototoxicity and nephrotoxicity are demonstrated adverse effects of aminoglycosides. Close monitoring of serum levels, interval audiology examinations at baseline and during therapy, and the use of extended-interval dosing may help to decrease the incidence of these toxicities.
Fosfomycin
In the United States, fosfomycin is available only as an oral formulation, although intravenous administration has been used in other countries. It is used exclusively to treat urinary tract infections.
CRE often retain susceptibility to fosfomycin, and clearance of urine in cystitis may be attempted with this agent to avoid the need for intravenous treatment.29,67
Combination therapy, other topics to be explored
Recent observational reports from Greece, Italy, and the United States describe higher survival rates in patients with CRE infections treated with a combination regimen rather than with colistin or tigecycline monotherapy, despite reliable activity of colistin and tigecycline, and often with regimens containing carbapenems. Clinical trials are needed to clarify the value of combination regimens that include carbapenems for the treatment of CRE infections.
Similarly, the role of carbapenems given as a high-dose prolonged infusion or as double carbapenem therapy needs to be explored further.37,55,56,68
Also to be determined is the optimal duration of treatment. To date, there is no evidence that increasing the duration of treatment beyond that recommended for infections with more susceptible bacteria results in improved outcomes. Therefore, commonly used durations include 1 week for complicated urinary tract infections, 2 weeks for bacteremia (from the first day with negative blood cultures and source control), and 8 to 14 days for pneumonia.
A SERIOUS THREAT
The emergence of CRE is a serious threat to the safety of patients in our health care system. CRE are highly successful nosocomial pathogens selected by the use of antibiotics, which burden patients debilitated by advanced age, comorbidities, and medical interventions. Infections with CRE result in poor outcomes, and available treatments of last resort such as tigecycline and colistin are of unclear efficacy and safety.
Control of CRE transmission is hindered by the transit of patients through long-term care facilities, and detection of CRE is difficult because of the myriad mechanisms involved and the imperfect methods currently available. Clinicians are concerned and frustrated, especially given the paucity of antibiotics in development to address the therapeutic dilemma posed by CRE. The challenge of CRE and other multidrug-resistant organisms requires the concerted response of professionals in various disciplines, including pharmacists, microbiologists, infection control practitioners, and infectious disease clinicians (Table 2).
Control of transmission by infection prevention strategies and by antimicrobial stewardship is going to be crucial in the years to come, not only for limiting the spread of CRE, but also for preventing the next multidrug-resistant “superbug” from emerging. However, the current reality is that health care providers will be faced with increased numbers of patients infected with CRE.
Prospective studies into transmission, molecular characteristics, and, most of all, treatment regimens are urgently needed. In addition, the development of new antimicrobials and nontraditional antimicrobial methods should have international priority.
- Papp-Wallace KM, Endimiani A, Taracila MA, Bonomo RA. Carbapenems: past, present, and future. Antimicrob Agents Chemother 2011; 55:4943–4960.
- Rahal JJ, Urban C, Horn D, et al. Class restriction of cephalosporin use to control total cephalosporin resistance in nosocomial Klebsiella. JAMA 1998; 280:1233–1237.
- Paterson DL, Ko WC, Von Gottberg A, et al. International prospective study of Klebsiella pneumoniae bacteremia: implications of extended-spectrum beta-lactamase production in nosocomial infections. Ann Intern Med 2004; 140:26–32.
- Endimiani A, Luzzaro F, Perilli M, et al. Bacteremia due to Klebsiella pneumoniae isolates producing the TEM-52 extended-spectrum beta-lactamase: treatment outcome of patients receiving imipenem or ciprofloxacin. Clin Infect Dis 2004; 38:243–251.
- Livermore DM, Sefton AM, Scott GM. Properties and potential of ertapenem. J Antimicrob Chemother 2003; 52:331–344.
- Bazan JA, Martin SI, Kaye KM. Newer beta-lactam antibiotics: doripenem, ceftobiprole, ceftaroline, and cefepime. Infect Dis Clin North Am 2009; 23:983–996, ix.
- Pakyz AL, MacDougall C, Oinonen M, Polk RE. Trends in antibacterial use in US academic health centers: 2002 to 2006. Arch Intern Med 2008; 168:2254–2260.
- Sievert DM, Ricks P, Edwards JR, et al. Antimicrobial-resistant pathogens associated with healthcare-associated infections: summary of data reported to the National Healthcare Safety Network at the Centers for Disease Control and Prevention, 2009–2010. Infect Control Hosp Epidemiol 2013; 34:1–14.
- Centers for Disease Control and Prevention. Vital signs: carbapenem-resistant Enterobacteriaceae. MMWR 2013; 62:165–170.
- Yigit H, Queenan AM, Anderson GJ, et al. Novel carbapenem-hydrolyzing beta-lactamase, KPC-1, from a carbapenem-resistant strain of Klebsiella pneumoniae. Antimicrob Agents Chemother 2001; 45:1151–1161.
- Smith Moland E, Hanson ND, Herrera VL, et al. Plasmid-mediated, carbapenem-hydrolysing beta-lactamase, KPC-2, in Klebsiella pneumoniae isolates. J Antimicrob Chemother 2003; 51:711–714.
- Woodford N, Tierno PM, Young K, et al. Outbreak of Klebsiella pneumoniae producing a new carbapenem-hydrolyzing class A beta-lactamase, KPC-3, in a New York medical center. Antimicrob Agents Chemother 2004; 48:4793–4799.
- Lahey Clinic. OXA-type β-Lactamases. http://www.lahey.org/Studies/other.asp#table1. Accessed March 11, 2013.
- Mathers AJ, Cox HL, Kitchel B, et al. Molecular dissection of an outbreak of carbapenem-resistant Enterobacteriaceae reveals intergenus KPC carbapenemase transmission through a promiscuous plasmid. MBio 2011; 2(6):e00204–11.
- Endimiani A, Hujer AM, Perez F, et al. Characterization of blaKPC-containing Klebsiella pneumoniae isolates detected in different institutions in the Eastern USA. J Antimicrob Chemother 2009; 63:427–437.
- Endimiani A, Carias LL, Hujer AM, et al. Presence of plasmid-mediated quinolone resistance in Klebsiella pneumoniae isolates possessing blaKPC in the United States. Antimicrob Agents Chemother 2008; 52:2680–2682.
- Magiorakos AP, Srinivasan A, Carey RB, et al. Multidrug-resistant, extensively drug-resistant and pandrug-resistant bacteria: an international expert proposal for interim standard definitions for acquired resistance. Clin Microbiol Infect 2012; 18:268–281.
- Tzouvelekis LS, Markogiannakis A, Psichogiou M, Tassios PT, Daikos GL. Carbapenemases in Klebsiella pneumoniae and other Enterobacteriaceae: an evolving crisis of global dimensions. Clin Microbiol Rev 2012; 25:682–707.
- Lopez JA, Correa A, Navon-Venezia S, et al. Intercontinental spread from Israel to Colombia of a KPC-3-producing Klebsiella pneumoniae strain. Clin Microbiol Infect 2011; 17:52–56.
- Naas T, Nordmann P, Vedel G, Poyart C. Plasmid-mediated carbapenem-hydrolyzing beta-lactamase KPC in a Klebsiella pneumoniae isolate from France. Antimicrob Agents Chemother 2005; 49:4423–4424.
- Navon-Venezia S, Leavitt A, Schwaber MJ, et al. First report on a hyperepidemic clone of KPC-3-producing Klebsiella pneumoniae in Israel genetically related to a strain causing outbreaks in the United States. Antimicrob Agents Chemother 2009; 53:818–820.
- Yong D, Toleman MA, Giske CG, et al. Characterization of a new metallo-beta-lactamase gene, bla(NDM-1), and a novel erythromycin esterase gene carried on a unique genetic structure in Klebsiella pneumoniae sequence type 14 from India. Antimicrob Agents Chemother 2009; 53:5046–5054.
- Livermore DM, Walsh TR, Toleman M, Woodford N. Balkan NDM-1: escape or transplant? Lancet Infect Dis 2011; 11:164.
- Centers for Disease Control and Prevention. Carbapenem-resistant Enterobacteriaceae containing New Delhi metallo-beta-lactamase in two patients - Rhode Island, March 2012. MMWR Morb Mortal Wkly Rep 2012; 61:446–448.
- Centers for Disease Control and Prevention. Detection of Enterobacteriaceae isolates carrying metallo-beta-lactamase—United States, 2010. MMWR Morb Mortal Wkly Rep 2010; 59:750.
- McGann P, Hang J, Clifford RJ, et al. Complete sequence of a novel 178-kilobase plasmid carrying bla(NDM-1) in a Providencia stuartii strain isolated in Afghanistan. Antimicrob Agents Chemother 2012; 56:1673–1679.
- Borgia S, Lastovetska O, Richardson D, et al. Outbreak of carbapenem-resistant Enterobacteriaceae containing blaNDM-1, Ontario, Canada. Clin Infect Dis 2012; 55:e109–e117.
- Endimiani A, Perez F, Bajaksouzian S, et al. Evaluation of updated interpretative criteria for categorizing Klebsiella pneumoniae with reduced carbapenem susceptibility. J Clin Microbiol 2010; 48:4417–4425.
- Neuner EA, Sekeres J, Hall GS, van Duin D. Experience with fosfomycin for treatment of urinary tract infections due to multidrug-resistant organisms. Antimicrob Agents Chemother 2012; 56:5744–5748.
- Neuner EA, Yeh JY, Hall GS, et al. Treatment and outcomes in carbapenem-resistant Klebsiella pneumoniae bloodstream infections. Diagn Microbiol Infect Dis 2011; 69:357–362.
- van Duin D, Kaye KS, Neuner EA, Bonomo RA. Carbapenem-resistant Enterobacteriaceae: a review of treatment and outcomes. Diagn Microbiol Infect Dis 2013; 75:115–120.
- Neuner EA, Yeh J-Y, Hall GS, et al. Treatment and outcomes in carbapenem-resistant Klebsiella pneumoniae bloodstream infections. Diagn Microbiol Infect Dis 2011; 69:357–362.
- Patel G, Huprikar S, Factor SH, Jenkins SG, Calfee DP. Outcomes of carbapenem-resistant Klebsiella pneumoniae infection and the impact of antimicrobial and adjunctive therapies. Infect Control Hosp Epidemiol 2008; 29:1099–1106.
- Borer A, Saidel-Odes L, Riesenberg K, et al. Attributable mortality rate for carbapenem-resistant Klebsiella pneumoniae bacteremia. Infect Control Hosp Epidemiol 2009; 30:972–976.
- Marchaim D, Chopra T, Perez F, et al. Outcomes and genetic relatedness of carbapenem-resistant Enterobacteriaceae at Detroit medical center. Infect Control Hosp Epidemiol 2011; 32:861–871.
- Perez F, Endimiani A, Ray AJ, et al. Carbapenem-resistant Acinetobacter baumannii and Klebsiella pneumoniae across a hospital system: impact of post-acute care facilities on dissemination. J Antimicrob Chemother 2010; 65:1807–1818.
- Tumbarello M, Viale P, Viscoli C, et al. Predictors of mortality in bloodstream infections caused by Klebsiella pneumoniae carbapenemase-producing K. pneumoniae: importance of combination therapy. Clin Infect Dis 2012; 55:943–950.
- Little ML, Qin X, Zerr DM, Weissman SJ. Molecular diversity in mechanisms of carbapenem resistance in paediatric Enterobacteriaceae. Int J Antimicrob Agents 2012; 39:52–57.
- Logan LK. Carbapenem-resistant Enterobacteriaceae: an emerging problem in children. Clin Infect Dis 2012; 55:852–859.
- Rastegar Lari A, Azimi L, Rahbar M, Fallah F, Alaghehbandan R. Phenotypic detection of Klebsiella pneumoniae carbapenemase among burns patients: first report from Iran. Burns 2013; 39:174–176.
- Zarkotou O, Pournaras S, Tselioti P, et al. Predictors of mortality in patients with bloodstream infections caused by KPC-producing Klebsiella pneumoniae and impact of appropriate antimicrobial treatment. Clin Microbiol Infect 2011; 17:1798–1803.
- Hyle EP, Ferraro MJ, Silver M, Lee H, Hooper DC. Ertapenem-resistant Enterobacteriaceae: risk factors for acquisition and outcomes. Infect Control Hosp Epidemiol 2010; 31:1242–1249.
- Ben-David D, Masarwa S, Navon-Venezia S, et al. Carbapenem-resistant Klebsiella pneumoniae in post-acute-care facilities in Israel. Infect Control Hosp Epidemiol 2011; 32:845–853.
- Safdar N, Maki DG. The commonality of risk factors for nosocomial colonization and infection with antimicrobial-resistant Staphylococcus aureus, enterococcus, gram-negative bacilli, Clostridium difficile, and Candida. Ann Intern Med 2002; 136:834–844.
- Marchaim D, Perez F, Lee J, et al. “Swimming in resistance”: co-colonization with carbapenem-resistant Enterobacteriaceae and Acinetobacter baumannii or Pseudomonas aeruginosa. Am J Infect Control 2012; 40:830–835.
- Smith PW, Bennett G, Bradley S, et al. SHEA/APIC Guideline: Infection prevention and control in the long-term care facility. Am J Infect Control 2008; 36:504–535.
- Centers for Disease Control and Prevention. Guidance for control of infections with carbapenem-resistant or carbapenemase-producing Enterobacteriaceae in acute care facilities. MMWR 2009; 58:256–260.
- Munoz-Price LS, De La Cuesta C, Adams S, et al. Successful eradication of a monoclonal strain of Klebsiella pneumoniae during a K. pneumoniae carbapenemase-producing K. pneumoniae outbreak in a surgical intensive care unit in Miami, Florida. Infect Control Hosp Epidemiol 2010; 31:1074–1077.
- Snitkin ES, Zelazny AM, Thomas PJ, et al. Tracking a hospital outbreak of carbapenem-resistant Klebsiella pneumoniae with whole-genome sequencing. Sci Transl Med 2012; 4:148ra116.
- Srinivasan A, Patel JB. Klebsiella pneumoniae carbapenemase-producing organisms: an ounce of prevention really is worth a pound of cure. Infect Control Hosp Epidemiol 2008; 29:1107–1109.
- Viau RA, Hujer AM, Marshall SH, et al. “Silent” dissemination of Klebsiella pneumoniae isolates bearing K pneumoniae carbapenemase in a long-term care facility for children and young adults in Northeast Ohio. Clin Infect Dis 2012; 54:1314–1321.
- Galani I, Rekatsina PD, Hatzaki D, Plachouras D, Souli M, Giamarellou H. Evaluation of different laboratory tests for the detection of metallo-beta-lactamase production in Enterobacteriaceae. J Antimicrob Chemother 2008; 61:548–553.
- Anderson KF, Lonsway DR, Rasheed JK, et al. Evaluation of methods to identify the Klebsiella pneumoniae carbapenemase in Enterobacteriaceae. J Clin Microbiol 2007; 45:2723–2725.
- Nicolle LE, Bradley S, Colgan R, Rice JC, Schaeffer A, Hooton TM. Infectious Diseases Society of America guidelines for the diagnosis and treatment of asymptomatic bacteriuria in adults. Clin Infect Dis 2005; 40:643–654.
- Daikos GL, Markogiannakis A. Carbapenemase-producing Klebsiella pneumoniae: (when) might we still consider treating with carbapenems? Clin Microbiol Infect 2011; 17:1135–1141.
- Bulik CC, Nicolau DP. Double-carbapenem therapy for carbapenemase-producing Klebsiella pneumoniae. Antimicrob Agents Chemother 2011; 55:3002–3004.
- Pogue JM, Lee J, Marchaim D, et al. Incidence of and risk factors for colistin-associated nephrotoxicity in a large academic health system. Clin Infect Dis 2011; 53:879–884.
- Garonzik SM, Li J, Thamlikitkul V, et al. Population pharmacokinetics of colistin methanesulfonate and formed colistin in critically ill patients from a multicenter study provide dosing suggestions for various categories of patients. Antimicrob Agents Chemother 2011; 55:3284–3294.
- Dalfino L, Puntillo F, Mosca A, et al. High-dose, extended-interval colistin administration in critically ill patients: is this the right dosing strategy? A preliminary study. Clin Infect Dis 2012; 54:1720–1726.
- Marchaim D, Chopra T, Pogue JM, et al. Outbreak of colistin-resistant, carbapenem-resistant Klebsiella pneumoniae in metropolitan Detroit, Michigan. Antimicrob Agents Chemother 2011; 55:593–599.
- Bonilla MF, Avery RK, Rehm SJ, Neuner EA, Isada CM, van Duin D. Extreme alkaline phosphatase elevation associated with tigecycline. J Antimicrob Chemother 2011; 66:952–953.
- Prasad P, Sun J, Danner RL, Natanson C. Excess deaths associated with tigecycline after approval based on noninferiority trials. Clin Infect Dis 2012; 54:1699–1709.
- Tasina E, Haidich AB, Kokkali S, Arvanitidou M. Efficacy and safety of tigecycline for the treatment of infectious diseases: a meta-analysis. Lancet Infect Dis 2011; 11:834–844.
- Cai Y, Wang R, Liang B, Bai N, Liu Y. Systematic review and meta-analysis of the effectiveness and safety of tigecycline for treatment of infectious disease. Antimicrob Agents Chemother 2011; 55:1162–1172.
- Yahav D, Lador A, Paul M, Leibovici L. Efficacy and safety of tigecycline: a systematic review and meta-analysis. J Antimicrob Chemother 2011; 66:1963–1971.
- Satlin MJ, Kubin CJ, Blumenthal JS, et al. Comparative effectiveness of aminoglycosides, polymyxin B, and tigecycline for clearance of carbapenem-resistant Klebsiella pneumoniae from urine. Antimicrob Agents Chemother 2011; 55:5893–5899.
- Endimiani A, Patel G, Hujer KM, et al. In vitro activity of fosfomycin against blaKPC-containing Klebsiella pneumoniae isolates, including those nonsusceptible to tigecycline and/or colistin. Antimicrob Agents Chemother 2010; 54:526–529.
- Qureshi ZA, Paterson DL, Potoski BA, et al. Treatment outcome of bacteremia due to KPC-producing Klebsiella pneumoniae: superiority of combination antimicrobial regimens. Antimicrob Agents Chemother 2012; 56:2108–2113.
The past 10 years have brought a formidable challenge to the clinical arena, as carbapenems, until now the most reliable antibiotics against Klebsiella species, Escherichia coli, and other Enterobacteriaceae, are becoming increasingly ineffective.
Infections caused by carbapenem-resistant Enterobacteriaceae (CRE) pose a serious threat to hospitalized patients. Moreover, CRE often demonstrate resistance to many other classes of antibiotics, thus limiting our therapeutic options. Furthermore, few new antibiotics are in line to replace carbapenems. This public health crisis demands redefined and refocused efforts in the diagnosis, treatment, and control of infections in hospitalized patients.
Here, we present an overview of CRE and discuss avenues to escape a new era of untreatable infections.
INCREASED USE OF CARBAPENEMS AND EMERGENCE OF RESISTANCE
Developed in the 1980s, carbapenems are derivatives of thienamycin. Imipenem and meropenem, the first members of the class, had a broad spectrum of antimicrobial activity that included coverage of Pseudomonas aeruginosa, positioning them well for the treatment of nosocomial infections. Back then, nearly all Enterobacteriaceae were susceptible to carbapenems.1
In the 1990s, Enterobacteriaceae started to develop resistance to cephalosporins—until then, the first-line antibiotics for these organisms—by acquiring extended-spectrum beta-lactamases, which inactivate those agents. Consequently, the use of cephalosporins had to be restricted, and reliance on carbapenems, which remained impervious to these enzymes, increased.2 In pivotal international studies of the treatment of infections caused by strains of K pneumoniae that produced these inactivating enzymes, outcomes were better with carbapenems than with cephalosporins and fluoroquinolones.3,4
Ertapenem, a highly protein-bound carbapenem without antipseudomonal activity, was released in 2001. Its prolonged half-life permitted once-daily dosing, which positioned it as an option for treating infections in community dwellers.5 Doripenem is the newest member of the carbapenem class; its spectrum of activity is similar to that of imipenem and meropenem and includes P aeruginosa.6 The use of carbapenems, measured in a representative sample of 35 university hospitals in the United States, increased by 59% between 2002 and 2006.7
In the early 2000s, carbapenem resistance in K pneumoniae and other Enterobacteriaceae was rare in North America. But then, after initial outbreaks occurred in hospitals in the Northeast (especially New York City), CRE began to spread throughout the United States. By 2009–2010, data from the National Healthcare Safety Network of the Centers for Disease Control and Prevention (CDC) showed that 12.8% of K pneumoniae isolates associated with bloodstream infections were resistant to carbapenems.8
In March 2013, the CDC disclosed that 3.9% of short-stay acute-care hospitals and 17.8% of long-term acute-care hospitals reported at least one CRE health care-associated infection in 2012. CRE had extended to 42 states, and the proportion of Enterobacteriaceae that are CRE had increased fourfold over the past 10 years.9
Coinciding with the increased use of carbapenems, multiple factors and modifiers likely contributed to the dramatic increase in CRE. These include use of other antibiotics in humans and animals, their relative penetration and selective effect on the gut microbiota, case-mix and infection control practices in different health care settings, and travel patterns.
POWERFUL ENZYMES THAT TRAVEL FAR
Bacterial acquisition of carbapenemases, enzymes that inactivate carbapenems, is crucial to the emergence of CRE. The enzyme in the sentinel carbapenem-resistant K pneumoniae isolate found in 1996 in North Carolina was designated K pneumoniae carbapenemase (KPC-1). This mechanism also conferred resistance to all cephalosporins, aztreonam, and beta-lactamase inhibitors such as clavulanic acid and tazobactam.10
KPC-2 (later determined to be identical to KPC-1) was found in K pneumoniae from Baltimore, and KPC-3 caused an early outbreak in New York City.11,12 To date, 12 additional variants of blaKPC, the gene encoding for the KPC enzyme, have been described.13
The genes encoding carbapenemases are usually found on plasmids or other common mobile genetic elements.14 These elements also allow the organism to acquire genes conferring resistance to other classes of antimicrobials, such as those encoding aminoglycoside-modifying enzymes, fluoroquinolone-resistance determinants, and additional beta-lactamases.15,16 The result is that CRE isolates are increasingly multidrug-resistant (ie, resistant to three or more classes of antimicrobials), extensively drug-resistant (ie, resistant to all but one or two classes), or pandrug-resistant (ie, resistant to all available classes of antibiotics).17 Thus, up to 98% of KPC-producing K pneumoniae are resistant to trimethoprim-sulfamethoxazole, 90% are resistant to fluoroquinolones, and 60% are resistant to gentamicin or amikacin.15
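To make these definitions concrete, the following minimal Python sketch classifies an isolate from the set of antimicrobial categories to which it is non-susceptible. The category list and the example isolate are hypothetical and greatly simplified; the interim definitions of Magiorakos et al17 are applied per organism and per agent, so this is only an illustration of the counting logic.

```python
# Illustrative sketch of the interim resistance definitions cited above:
#   MDR = non-susceptible to agents in >= 3 antimicrobial categories
#   XDR = non-susceptible in all but <= 2 categories
#   PDR = non-susceptible in all categories
# The categories below are a simplified, hypothetical subset used only for illustration.

CATEGORIES = ["aminoglycosides", "carbapenems", "cephalosporins",
              "fluoroquinolones", "folate-pathway inhibitors",
              "polymyxins", "tetracyclines"]

def classify(nonsusceptible_categories):
    """Return 'MDR', 'XDR', 'PDR', or 'none' for the set of categories
    in which an isolate is non-susceptible."""
    nonsusceptible = set(nonsusceptible_categories) & set(CATEGORIES)
    remaining_susceptible = len(CATEGORIES) - len(nonsusceptible)
    if remaining_susceptible == 0:
        return "PDR"
    if remaining_susceptible <= 2:
        return "XDR"
    if len(nonsusceptible) >= 3:
        return "MDR"
    return "none"

# Hypothetical KPC-producing isolate retaining susceptibility only to polymyxins:
print(classify(["carbapenems", "cephalosporins", "fluoroquinolones",
                "aminoglycosides", "folate-pathway inhibitors", "tetracyclines"]))  # XDR
```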
The mobility of these genetic elements has also allowed for dispersion into diverse Enterobacteriaceae such as E coli, Klebsiella oxytoca, Enterobacter, Serratia, and Salmonella species. Furthermore, KPC has been described in non-Enterobacteriaceae such as Acinetobacter baumannii and P aeruginosa.
Extending globally, KPC is now endemic in the Mediterranean basin, including Israel, Greece, and Italy; in South America, especially Colombia, Argentina, and Brazil; and in China.18 Most interesting is the intercontinental transfer of these strains: it has been documented that the index patient with KPC-producing K pneumoniae in Medellin, Colombia, came from Israel to undergo liver transplantation.19 Likewise, KPC-producing K pneumoniae in France and Israel could be linked epidemiologically and genetically to the predominant US strain.20,21
Even more explosive has been the surge of another carbapenemase, the Ambler Class B New Delhi metallo-beta-lactamase, or NDM-1. Initially reported in a urinary isolate of K pneumoniae from a Swedish patient who had been hospitalized in New Delhi in 2008, NDM-1 was soon found throughout India, in Pakistan, and in the United Kingdom.22 Interestingly, several of the UK patients with NDM-1-harboring bacteria had received organ transplants in the Indian subcontinent. Reports from elsewhere in Europe, Australia, and Africa followed suit, usually with a connection to the Indian subcontinent epicenter. In contrast, several other cases in Europe were traced to the Balkans, where there appears to be another focus of NDM-1.23
Penetration of NDM-1 into North America has begun, with cases and outbreaks reported in several US and Canadian regions, and in a military medical facility in Afghanistan. In several of these instances, there has been a documented link with travel and hospitalizations overseas.24–27 However, no such link with travel could be established in a recent outbreak in Ontario.27
In addition, resistance to carbapenems may result from other enzymes (Table 1), or from combinations of changes in outer-membrane porins and the production of extended-spectrum beta-lactamases or other cephalosporinases.28
DEADLY IMPACT ON THE MOST VULNERABLE
Regardless of the resistance pattern, Enterobacteriaceae are an important cause of health care-associated infections, including urinary and bloodstream infections in patients with indwelling catheters, pneumonia (often in association with mechanical ventilation), and, less frequently, infections of skin and soft tissues and the central nervous system.29–31
Several studies have examined the clinical characteristics and outcomes of patients with CRE infections. Those typically affected are elderly and debilitated and have multiple comorbidities, including diabetes mellitus and immunosuppression. They are heavily exposed to health care with frequent antecedent hospitalizations and invasive procedures. Furthermore, they are often severely ill and require intensive care. Patients infected with carbapenem-resistant K pneumoniae, compared with those with carbapenem-susceptible strains, are more likely to have undergone organ or stem cell transplantation or mechanical ventilation, and to have had a longer hospital stay before infection.
They also experience a high mortality rate, which ranges from 30% in patients with nonbacteremic infections to 72% in series of patients with liver transplants or bloodstream infections.32–37
More recently, CRE has been reported in other vulnerable populations, such as children with critical illness or cancer and in burn patients.38–40
Elderly and critically ill patients with bacteremia originating from a high-risk source (eg, pneumonia) typically face the most adverse outcomes. With increasing drug resistance, inadequate initial antimicrobial therapy is more commonly seen and may account for some of these poor outcomes.37,41
LONG-TERM CARE FACILITIES IN THE EYE OF THE STORM
A growing body of evidence suggests that long-term care facilities play a crucial role in the spread of CRE.
In an investigation of carbapenem-resistant A baumannii and K pneumoniae in a hospital system,36 75% of patients with carbapenem-resistant K pneumoniae were admitted from long-term care facilities, and only 1 of 13 patients was discharged home.
In a series of patients with carbapenem-resistant K pneumoniae bloodstream infections, 42% survived their index hospital stay. Of these patients, only 32% were discharged home, and readmissions were very common.32
Admission from a long-term care facility or transfer from another hospital is significantly associated with carbapenem resistance in patients with Enterobacteriaceae.42 Similarly, in Israel, a large reservoir of CRE was found in postacute care facilities.43
It is clear that long-term care residents are at increased risk of colonization and infection with CRE. However, further studies are needed to evaluate whether this simply reflects an overlap in risk factors, or whether significant patient-to-patient transmission occurs in these settings.
INFECTION CONTROL TAKES CENTER STAGE
It is important to note that risk factors for CRE match those of various nosocomial infections, including other resistant gram-negative bacilli, methicillin-resistant Staphylococcus aureus, vancomycin-resistant enterococci, Candida species, and Clostridium difficile; in fact, CRE often coexist with other multidrug-resistant organisms.44,45
Common risk factors include residence in a long-term care facility, an intensive care unit stay, use of lines and catheters, and antibiotic exposure. This commonality of risk factors implies that systematic infection-prevention measures will have an impact on the prevalence and incidence rates of multidrug-resistant organism infections across the board, CRE included. It should be emphasized that strict compliance with hand hygiene is still the foundation of any infection-prevention strategy.
Infection prevention and the control of transmission of CRE in long-term care facilities pose unique challenges. Guidelines from the Society for Healthcare Epidemiology of America and the Association for Professionals in Infection Control and Epidemiology recommend the use of contact precautions for patients with multidrug-resistant organisms, including CRE, who are ill and totally dependent on health care workers for activities of daily living or whose secretions or drainage cannot be contained. These same guidelines advise against attempting to eradicate multidrug-resistant organism colonization status.46
In acute care facilities, Best Infection Control Practices from the CDC and the Healthcare Infection Control Practices Advisory Committee encourage mechanisms for the rapid recognition and reporting of CRE cases to infection prevention personnel so that contact precautions can be implemented. Furthermore, facilities without CRE cases should carry out periodic laboratory reviews to identify cases, and patients exposed to CRE cases should be screened with surveillance cultures.47
Outbreaks of CRE may require extraordinary infection control measures. An approach that combines point-prevalence surveillance of colonization and detection of environmental and common-equipment contamination with a bundle consisting of chlorhexidine baths, cohorting of colonized patients and health care personnel, increased environmental cleaning, and staff education may be effective in controlling outbreaks of CRE.48
Nevertheless, control of CRE may prove exceptionally difficult. A recent high-profile outbreak of carbapenem-resistant K pneumoniae at the National Institutes of Health Clinical Center in Maryland caused infections in 18 patients, 11 of whom died.49 Of note, carbapenem-resistant K pneumoniae was detected in this outbreak in both respiratory equipment and sink drains. The outbreak was ultimately contained by detection through surveillance cultures and by strict cohorting of colonized patients, which minimized common medical equipment and personnel between affected patients and other patients in the hospital. Additionally, rooms were sanitized with hydrogen peroxide vapor, and sinks and drains where carbapenem-resistant K pneumoniae was detected were removed.
CHALLENGES IN THE MICROBIOLOGY LABORATORY
Adequate treatment and control of CRE infections is predicated upon their accurate and prompt diagnosis from patient samples in the clinical microbiology laboratory.50
Traditional and current culture-based methods take several days to provide that information, delaying effective antibiotic therapy and permitting the transmission of undetected CRE. Furthermore, interpretive criteria for minimum inhibitory concentrations (MICs) of carbapenems recently required readjustment, as many KPC-producing strains of K pneumoniae had MICs below the previous breakpoint of resistance. In the past, this contributed to instances of “silent” dissemination of KPC-producing K pneumoniae.51
However, using the new, lower breakpoints of resistance for carbapenems without a phenotypic test such as the modified Hodge test or the carbapenem-EDTA combination tests will not differentiate between the various mechanisms of carbapenem resistance.28,52,53 This may be clinically relevant, as the clinical response to carbapenem therapy may vary depending on the mechanism of resistance.
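As a rough illustration of why the breakpoint revision mattered, the sketch below categorizes a hypothetical meropenem MIC against older and revised cutoffs. The numeric values approximate the pre- and post-revision breakpoints discussed above but are given only for illustration and should be verified against current CLSI documents before any interpretation of laboratory results.

```python
# Approximate, illustrative meropenem breakpoints (ug/mL); not authoritative values.
OLD = {"susceptible": 4, "resistant": 16}   # susceptible <= 4, resistant >= 16
NEW = {"susceptible": 1, "resistant": 4}    # susceptible <= 1, resistant >= 4

def categorize(mic, breakpoints):
    """Assign a susceptibility category to an MIC using the given breakpoints."""
    if mic <= breakpoints["susceptible"]:
        return "susceptible"
    if mic >= breakpoints["resistant"]:
        return "resistant"
    return "intermediate"

# A hypothetical KPC-producing isolate with a meropenem MIC of 2 ug/mL:
mic = 2
print("old criteria:", categorize(mic, OLD))   # susceptible -> risk of "silent" dissemination
print("new criteria:", categorize(mic, NEW))   # intermediate -> flagged for further work-up

# Note: MIC categorization alone does not distinguish carbapenemase production from
# other resistance mechanisms; phenotypic tests (eg, the modified Hodge test) or
# molecular methods are needed for that distinction.
```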
GENERAL PRINCIPLES APPLY
In treating patients infected with CRE, clinicians need to strictly observe general principles of infectious disease management to ensure the best possible outcomes. These include:
Timely and accurate diagnosis, as discussed above.
Source control, which should include drainage of any infected collections, and removal of lines, devices, and urinary catheters.
Distinguishing between infection and colonization. CRE are often encountered as urinary isolates, and the distinction between asymptomatic bacteriuria and urinary tract infection may be extremely difficult, especially in residents of long-term care facilities with chronic indwelling catheters, who are the group at highest risk of CRE colonization and infection. Urinalysis may be helpful when pyuria is absent, as the absence of pyuria rules out infection; however, the presence of pyuria is not a helpful feature, as pyuria is common in both asymptomatic bacteriuria and urinary tract infection.54 Symptoms should be carefully evaluated in every patient with bacteriuria, and urinary tract infection should be a diagnosis of exclusion in patients with nonspecific symptoms such as confusion or falls (this reasoning is restated in the sketch after these principles).
Selection of the most appropriate antibiotic regimen. While the emphasis is often on the antibiotic regimen, the above elements should not be neglected.
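The pyuria reasoning above can be restated as a simple asymmetric rule, sketched below; the function and its arguments are hypothetical illustrations of the text, not a clinical decision tool.

```python
def assess_cre_bacteriuria(pyuria_present, urinary_symptoms_present):
    """Restates, for illustration only, the reasoning in the text for patients
    with CRE isolated from urine (eg, long-term care residents with catheters)."""
    if not pyuria_present:
        # The absence of pyuria argues strongly against urinary tract infection.
        return "urinary tract infection effectively ruled out"
    if not urinary_symptoms_present:
        # Pyuria is common in asymptomatic bacteriuria, so it is not diagnostic by itself.
        return "likely asymptomatic bacteriuria; pyuria alone is not diagnostic"
    return ("possible urinary tract infection; evaluate symptoms carefully and treat "
            "infection as a diagnosis of exclusion when symptoms are nonspecific")

print(assess_cre_bacteriuria(pyuria_present=False, urinary_symptoms_present=False))
```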
A DWINDLING THERAPEUTIC ARSENAL
Clinicians treating CRE infections are left with only a few antibiotic options. These options are generally limited by a lack of clinical data on efficacy, as well as by concerns about toxicity. These “drugs of last resort” include polymyxins (such as colistin), aminoglycosides, tigecycline, and fosfomycin. The role of carbapenem therapy, potentially in combination regimens, in a high-dose prolonged infusion, or even “double carbapenem therapy” remains to be determined.37,55,56
Colistin
Colistin is one of the first-line agents for treating CRE infections. First introduced in the 1950s, colistin was largely abandoned in favor of aminoglycosides, so a proportion of the data on its safety and efficacy comes from older, less rigorous studies.
Neurotoxicity and nephrotoxicity are the two main concerns with colistin, and while the incidence of these adverse events does appear to be lower with modern preparations, it is still substantial.57 Dosing issues have not been completely clarified either, especially in relation to renal clearance and in patients on renal replacement therapy.58,59 Unfortunately, there have been reports of outbreaks of CRE displaying resistance to colistin.60
Tigecycline
Tigecycline is a newer antibiotic of the glycylcycline class. Like colistin, it has no oral preparation for systemic infections.
The main side effect of tigecycline is nausea.61 Other reported issues include pancreatitis and extreme alkaline phosphatase elevations.
The efficacy of tigecycline has come into question in view of meta-analyses of clinical trials, some of which have shown higher mortality rates in patients treated with tigecycline than with comparator agents.62–65 Based on these data, the US Food and Drug Administration issued a warning in 2010 regarding the increased mortality risk. Although these meta-analyses did not include patients with CRE for whom available comparators would have been ineffective, it is an important safety signal.
The efficacy of tigecycline is further limited by increasing in vitro resistance in CRE. Serum and urinary levels of tigecycline are low, and most experts discourage the use of tigecycline as monotherapy for bloodstream or urinary tract infections.
Aminoglycosides
CRE display variable in vitro susceptibility to different aminoglycosides. If the organism is susceptible, aminoglycosides may be very useful in the treatment of CRE infections, especially urinary tract infections. In a study of carbapenem-resistant K pneumoniae urinary tract infections, patients who were treated with polymyxins or tigecycline were significantly less likely to have clearance of their urine than patients treated with aminoglycosides.66
Ototoxicity and nephrotoxicity are demonstrated adverse effects of aminoglycosides. Close monitoring of serum levels, interval audiology examinations at baseline and during therapy, and the use of extended-interval dosing may help to decrease the incidence of these toxicities.
Fosfomycin
Fosfomycin is only available as an oral formulation in the United States, although intravenous administration has been used in other countries. It is exclusively used to treat urinary tract infections.
CRE often retain susceptibility to fosfomycin, and clearance of urine in cystitis may be attempted with this agent to avoid the need for intravenous treatment.29,67
Combination therapy, other topics to be explored
Recent observational reports from Greece, Italy, and the United States describe higher survival rates in patients with CRE infections treated with a combination regimen rather than with colistin or tigecycline monotherapy. This benefit was seen even when colistin and tigecycline retained reliable in vitro activity, and the combination regimens often contained carbapenems. Clinical trials are needed to clarify the value of combination regimens that include carbapenems for the treatment of CRE infections.
Similarly, the role of carbapenems given as a high-dose prolonged infusion or as double carbapenem therapy needs to be explored further.37,55,56,68
Also to be determined is the optimal duration of treatment. To date, there is no evidence that increasing the duration of treatment beyond that recommended for infections with more susceptible bacteria results in improved outcomes. Therefore, commonly used durations include 1 week for complicated urinary tract infections, 2 weeks for bacteremia (from the first day with negative blood cultures and source control), and 8 to 14 days for pneumonia.
A SERIOUS THREAT
The emergence of CRE is a serious threat to the safety of patients in our health care system. CRE are highly successful nosocomial pathogens, selected by the use of antibiotics, that burden patients debilitated by advanced age, comorbidities, and medical interventions. Infections with CRE result in poor outcomes, and available treatments of last resort such as tigecycline and colistin are of unclear efficacy and safety.
Control of CRE transmission is hindered by the transit of patients through long-term care facilities, and detection of CRE is difficult because of the myriad mechanisms involved and the imperfect methods currently available. Clinicians are concerned and frustrated, especially given the paucity of antibiotics in development to address the therapeutic dilemma posed by CRE. The challenge of CRE and other multidrug-resistant organisms requires the concerted response of professionals in various disciplines, including pharmacists, microbiologists, infection control practitioners, and infectious disease clinicians (Table 2).
Control of transmission by infection prevention strategies and by antimicrobial stewardship is going to be crucial in the years to come, not only for limiting the spread of CRE, but also for preventing the next multidrug-resistant “superbug” from emerging. However, the current reality is that health care providers will be faced with increased numbers of patients infected with CRE.
Prospective studies into transmission, molecular characteristics, and, most of all, treatment regimens are urgently needed. In addition, the development of new antimicrobials and nontraditional antimicrobial methods should have international priority.
- Papp-Wallace KM, Endimiani A, Taracila MA, Bonomo RA. Carbapenems: past, present, and future. Antimicrob Agents Chemother 2011; 55:4943–4960.
- Rahal JJ, Urban C, Horn D, et al. Class restriction of cephalosporin use to control total cephalosporin resistance in nosocomial Klebsiella. JAMA 1998; 280:1233–1237.
- Paterson DL, Ko WC, Von Gottberg A, et al. International prospective study of Klebsiella pneumoniae bacteremia: implications of extended-spectrum beta-lactamase production in nosocomial infections. Ann Intern Med 2004; 140:26–32.
- Endimiani A, Luzzaro F, Perilli M, et al. Bacteremia due to Klebsiella pneumoniae isolates producing the TEM-52 extended-spectrum beta-lactamase: treatment outcome of patients receiving imipenem or ciprofloxacin. Clin Infect Dis 2004; 38:243–251.
- Livermore DM, Sefton AM, Scott GM. Properties and potential of ertapenem. J Antimicrob Chemother 2003; 52:331–344.
- Bazan JA, Martin SI, Kaye KM. Newer beta-lactam antibiotics: doripenem, ceftobiprole, ceftaroline, and cefepime. Infect Dis Clin North Am 2009; 23:983–996, ix.
- Pakyz AL, MacDougall C, Oinonen M, Polk RE. Trends in antibacterial use in US academic health centers: 2002 to 2006. Arch Intern Med 2008; 168:2254–2260.
- Sievert DM, Ricks P, Edwards JR, et al. Antimicrobial-resistant pathogens associated with healthcare-associated infections: summary of data reported to the National Healthcare Safety Network at the Centers for Disease Control and Prevention, 2009–2010. Infect Control Hosp Epidemiol 2013; 34:1–14.
- Centers for Disease Control and Prevention. Vital signs: carbapenem-resistant Enterobacteriaceae. MMWR 2013; 62:165–170.
- Yigit H, Queenan AM, Anderson GJ, et al. Novel carbapenem-hydrolyzing beta-lactamase, KPC-1, from a carbapenem-resistant strain of Klebsiella pneumoniae. Antimicrob Agents Chemother 2001; 45:1151–1161.
- Smith Moland E, Hanson ND, Herrera VL, et al. Plasmid-mediated, carbapenem-hydrolysing beta-lactamase, KPC-2, in Klebsiella pneumoniae isolates. J Antimicrob Chemother 2003; 51:711–714.
- Woodford N, Tierno PM, Young K, et al. Outbreak of Klebsiella pneumoniae producing a new carbapenem-hydrolyzing class A beta-lactamase, KPC-3, in a New York medical center. Antimicrob Agents Chemother 2004; 48:4793–4799.
- Lahey Clinic. OXA-type β-Lactamases. http://www.lahey.org/Studies/other.asp#table1. Accessed March 11, 2013.
- Mathers AJ, Cox HL, Kitchel B, et al. Molecular dissection of an outbreak of carbapenem-resistant Enterobacteriaceae reveals intergenus KPC carbapenemase transmission through a promiscuous plasmid. mBio 2011; 2(6):e00204-11.
- Endimiani A, Hujer AM, Perez F, et al. Characterization of blaKPC-containing Klebsiella pneumoniae isolates detected in different institutions in the Eastern USA. J Antimicrob Chemother 2009; 63:427–437.
- Endimiani A, Carias LL, Hujer AM, et al. Presence of plasmid-mediated quinolone resistance in Klebsiella pneumoniae isolates possessing blaKPC in the United States. Antimicrob Agents Chemother 2008; 52:2680–2682.
- Magiorakos AP, Srinivasan A, Carey RB, et al. Multidrug-resistant, extensively drug-resistant and pandrug-resistant bacteria: an international expert proposal for interim standard definitions for acquired resistance. Clin Microbiol Infect 2012; 18:268–281.
- Tzouvelekis LS, Markogiannakis A, Psichogiou M, Tassios PT, Daikos GL. Carbapenemases in Klebsiella pneumoniae and other Enterobacteriaceae: an evolving crisis of global dimensions. Clin Microbiol Rev 2012; 25:682–707.
- Lopez JA, Correa A, Navon-Venezia S, et al. Intercontinental spread from Israel to Colombia of a KPC-3-producing Klebsiella pneumoniae strain. Clin Microbiol Infect 2011; 17:52–56.
- Naas T, Nordmann P, Vedel G, Poyart C. Plasmid-mediated carbapenem-hydrolyzing beta-lactamase KPC in a Klebsiella pneumoniae isolate from France. Antimicrob Agents Chemother 2005; 49:4423–4424.
- Navon-Venezia S, Leavitt A, Schwaber MJ, et al. First report on a hyperepidemic clone of KPC-3-producing Klebsiella pneumoniae in Israel genetically related to a strain causing outbreaks in the United States. Antimicrob Agents Chemother 2009; 53:818–820.
- Yong D, Toleman MA, Giske CG, et al. Characterization of a new metallo-beta-lactamase gene, bla(NDM-1), and a novel erythromycin esterase gene carried on a unique genetic structure in Klebsiella pneumoniae sequence type 14 from India. Antimicrob Agents Chemother 2009; 53:5046–5054.
- Livermore DM, Walsh TR, Toleman M, Woodford N. Balkan NDM-1: escape or transplant? Lancet Infect Dis 2011; 11:164.
- Centers for Disease Control and Prevention. Carbapenem-resistant enterobacteriaceae containing New Delhi metallo-beta-lactamase in two patients - Rhode Island, March 2012. MMWR Morb Mortal Wkly Rep 2012Jun 22; 61:446–448.
- Centers for Disease Control and Prevention. Detection of Enterobacteriaceae isolates carrying metallo-beta-lactamase—United States, 2010. MMWR Morb Mortal Wkly Rep 2010; 59:750.
- McGann P, Hang J, Clifford RJ, et al. Complete sequence of a novel 178-kilobase plasmid carrying bla(NDM-1) in a Providencia stuartii strain isolated in Afghanistan. Antimicrob Agents Chemother 2012; 56:1673–1679.
- Borgia S, Lastovetska O, Richardson D, et al. Outbreak of carbapenem-resistant Enterobacteriaceae containing blaNDM-1, Ontario, Canada. Clin Infect Dis 2012; 55:e109–e117.
- Endimiani A, Perez F, Bajaksouzian S, et al. Evaluation of updated interpretative criteria for categorizing Klebsiella pneumoniae with reduced carbapenem susceptibility. J Clinic Microbiol 2010; 48:4417–4425.
- Neuner EA, Sekeres J, Hall GS, van Duin D. Experience with fosfomycin for treatment of urinary tract infections due to multidrug-resistant organisms. Antimicrob Agents Chemother 2012; 56:5744–5748.
- Neuner EA, Yeh JY, Hall GS, et al. Treatment and outcomes in carbapenem-resistant Klebsiella pneumoniae bloodstream infections. Diagnostic Microbiol Infect Dis 2011; 69:357–362.
- van Duin D, Kaye KS, Neuner EA, Bonomo RA. Carbapenem-resistant Enterobacteriaceae: a review of treatment and outcomes. Diagnostic Microbiol Infect Dis 2013; 75:115–120.
- Neuner EA, Yeh J-Y, Hall GS, et al. Treatment and outcomes in carbapenem-resistant Klebsiella pneumoniae bloodstream infections. Diagn Microbiol Infect Dis 2011; 69:357–362.
- Patel G, Huprikar S, Factor SH, Jenkins SG, Calfee DP. Outcomes of carbapenem-resistant Klebsiella pneumoniae infection and the impact of antimicrobial and adjunctive therapies. Infect Control Hosp Epidemiol 2008; 29:1099–1106.
- Borer A, Saidel-Odes L, Riesenberg K, et al. Attributable mortality rate for carbapenem-resistant Klebsiella pneumoniae bacteremia. Infect Control Hosp Epidemiol 2009; 30:972–976.
- Marchaim D, Chopra T, Perez F, et al. Outcomes and genetic relatedness of carbapenem-resistant Enterobacteriaceae at Detroit medical center. Infect Control Hosp Epidemiol 2011; 32:861–871.
- Perez F, Endimiani A, Ray AJ, et al. Carbapenem-resistant Acinetobacter baumannii and Klebsiella pneumoniae across a hospital system: impact of post-acute care facilities on dissemination. J Antimicrob Chemother 2010; 65:1807–1818.
- Tumbarello M, Viale P, Viscoli C, et al. Predictors of mortality in bloodstream infections caused by Klebsiella pneumoniae carbapenemase-producing K. pneumoniae: importance of combination therapy. Clin Infect Dis 2012; 55:943–950.
- Little ML, Qin X, Zerr DM, Weissman SJ. Molecular diversity in mechanisms of carbapenem resistance in paediatric Enterobacteriaceae. Int J Antimicrob Agents 2012; 39:52–57.
- Logan LK. Carbapenem-resistant Enterobacteriaceae: an emerging problem in children. Clin Infect Dis 2012; 55:852–859.
- Rastegar Lari A, Azimi L, Rahbar M, Fallah F, Alaghehbandan R. Phenotypic detection of Klebsiella pneumoniae carbapenemase among burns patients: first report from Iran. Burns 2013; 39:174–176.
- Zarkotou O, Pournaras S, Tselioti P, et al. Predictors of mortality in patients with bloodstream infections caused by KPC-producing Klebsiella pneumoniae and impact of appropriate antimicrobial treatment. Clin Microbiol Infect 2011; 17:1798–1803.
- Hyle EP, Ferraro MJ, Silver M, Lee H, Hooper DC. Ertapenem-resistant Enterobacteriaceae: risk factors for acquisition and outcomes. Infect Control Hosp Epidemiol 2010; 31:1242–1249.
- Ben-David D, Masarwa S, Navon-Venezia S, et al. Carbapenem-resistant Klebsiella pneumoniae in post-acute-care facilities in Israel. Infect Control Hosp Epidemiol 2011; 32:845–853.
- Safdar N, Maki DG. The commonality of risk factors for nosocomial colonization and infection with antimicrobial-resistant Staphylococcus aureus, enterococcus, gram-negative bacilli, Clostridium difficile, and Candida. Ann Intern Med 2002; 136:834–844.
- Marchaim D, Perez F, Lee J, et al. “Swimming in resistance”: co-colonization with carbapenem-resistant Enterobacteriaceae and Acinetobacter baumannii or Pseudomonas aeruginosa.” Am J Infect Control 2012; 40:830–835.
- Smith PW, Bennett G, Bradley S, et al. SHEA/APIC Guideline: Infection prevention and control in the long-term care facility. Am J Infect Control 2008; 36:504–535.
- Centers for Disease Control and Prevention. Guidance for control of infections with carbapenem-resistant or carbapenemase-producing Enterobacteriaceae in acute care facilities. MMWR 2009; 58:256–260.
- Munoz-Price LS, De La Cuesta C, Adams S, et al. Successful eradication of a monoclonal strain of Klebsiella pneumoniae during a K. pneumoniae carbapenemase-producing K. pneumoniae outbreak in a surgical intensive care unit in Miami, Florida. Infect Control Hosp Epidemiol 2010; 31:1074–1077.
- Snitkin ES, Zelazny AM, Thomas PJ, et al. Tracking a hospital outbreak of carbapenem-resistant Klebsiella pneumoniae with wholegenome sequencing. Sci Transl Med 2012; 4:148ra16.
- Srinivasan A, Patel JB. Klebsiella pneumoniae carbapenemase-producing organisms: an ounce of prevention really is worth a pound of cure. Infect Control Hosp Epidemiol 2008; 29:1107–1109.
- Viau RA, Hujer AM, Marshall SH, et al. “Silent” dissemination of Klebsiella pneumoniae isolates bearing K pneumoniae carbapenemase in a long-term care facility for children and young adults in Northeast Ohio”. Clin Infect Dis 2012; 54:1314–1321.
- Galani I, Rekatsina PD, Hatzaki D, Plachouras D, Souli M, Giamarellou H. Evaluation of different laboratory tests for the detection of metallo-beta-lactamase production in Enterobacteriaceae. J Antimicrob Chemother 2008; 61:548–553.
- Anderson KF, Lonsway DR, Rasheed JK, et al. Evaluation of methods to identify the Klebsiella pneumoniae carbapenemase in Enterobacteriaceae. J Clin Microbiol 2007; 45:2723–2725.
- Nicolle LE, Bradley S, Colgan R, Rice JC, Schaeffer A, Hooton TM. Infectious Diseases Society of America guidelines for the diagnosis and treatment of asymptomatic bacteriuria in adults. Clin Infect Dis 2005; 40:643–654.
- Daikos GL, Markogiannakis A. Carbapenemase-producing Klebsiella pneumoniae: (when) might we still consider treating with carbapenems? Clin Microbiol Infect 2011; 17:1135–1141.
- Bulik CC, Nicolau DP. Double-carbapenem therapy for carbapenemase-producing Klebsiella pneumoniae. Antimicrob Agents Chemother 2011; 55:3002–3004.
- Pogue JM, Lee J, Marchaim D, et al. Incidence of and risk factors for colistin-associated nephrotoxicity in a large academic health system. Clin Infect Dis 2011; 53:879–884.
- Garonzik SM, Li J, Thamlikitkul V, et al. Population pharmacokinetics of colistin methanesulfonate and formed colistin in critically ill patients from a multicenter study provide dosing suggestions for various categories of patients. Antimicrob Agents Chemother 2011; 55:3284–3294.
- Dalfno L, Puntillo F, Mosca A, et al. High-dose, extended-interval colistin administration in critically ill patients: is this the right dosing strategy? A preliminary study. Clin Infect Dis 2012; 54:1720–1726.
- Marchaim D, Chopra T, Pogue JM, et al. Outbreak of colistin-resistant, carbapenem-resistant Klebsiella pneumoniae in metropolitan Detroit, Michigan. Antimicrob Agents Chemother 2011; 55:593–599.
- Bonilla MF, Avery RK, Rehm SJ, Neuner EA, Isada CM, van Duin D. Extreme alkaline phosphatase elevation associated with tigecycline. J Antimicrob Chemother 2011; 66:952–953.
- Prasad P, Sun J, Danner RL, Natanson C. Excess deaths associated with tigecycline after approval based on noninferiority trials. Clin Infect Dis 2012; 54:1699–1709.
- Tasina E, Haidich AB, Kokkali S, Arvanitidou M. Efficacy and safety of tigecycline for the treatment of infectious diseases: a meta-analysis. Lancet Infect Dis 2011; 11:834–844.
- Cai Y, Wang R, Liang B, Bai N, Liu Y. Systematic review and meta-analysis of the effectiveness and safety of tigecycline for treatment of infectious disease. Antimicrob Agents Chemother 2011; 55:1162–1172.
- Yahav D, Lador A, Paul M, Leibovici L. Efficacy and safety of tigecycline: a systematic review and meta-analysis. J Antimicrob Chemother 2011; 66:1963–1971.
- Satlin MJ, Kubin CJ, Blumenthal JS, et al. Comparative effectiveness of aminoglycosides, polymyxin B, and tigecycline for clearance of carbapenem-resistant Klebsiella pneumoniae from urine. Antimicrob Agents Chemother 2011; 55:5893–5899.
- Endimiani A, Patel G, Hujer KM, et al. In vitro activity of fosfomycin against blaKPC-containing Klebsiella pneumoniae isolates, including those nonsusceptible to tigecycline and/or colistin. Antimicrob Agents Chemother 2010; 54:526–529.
- Qureshi ZA, Paterson DL, Potoski BA, et al. Treatment outcome of bacteremia due to KPC-producing Klebsiella pneumoniae: superiority of combination antimicrobial regimens. Antimicrob Agents Chemother 2012; 56:2108–2113.
- Papp-Wallace KM, Endimiani A, Taracila MA, Bonomo RA. Carbapenems: past, present, and future. Antimicrob Agents Chemother 2011; 55:4943–4960.
- Rahal JJ, Urban C, Horn D, et al. Class restriction of cephalosporin use to control total cephalosporin resistance in nosocomial Klebsiella. JAMA 1998; 280:1233–1237.
- Paterson DL, Ko WC, Von Gottberg A, et al. International prospective study of Klebsiella pneumoniae bacteremia: implications of extended-spectrum beta-lactamase production in nosocomial Infections. Ann Intern Med 2004; 140:26–32.
KEY POINTS
- The utility of carbapenems is being undermined by the emergence of resistance in Enterobacteriaceae and other bacteria.
- The clinical impact of CRE falls disproportionately on elderly patients exposed to these organisms in hospitals and long-term care facilities. In this vulnerable group, invasive CRE infections carry a high death rate.
- Long-term care facilities play an important role in the transmission dynamics of CRE.
- Tigecycline and colistin are treatments of last resort for infections caused by CRE. Their use in combination with other agents, especially carbapenems, may improve outcomes and warrants further study.
- Early detection of CRE in the microbiology laboratory is key to guiding infection control and treatment decisions and supporting surveillance efforts.