Does digoxin decrease morbidity for those in sinus rhythm with heart failure?
In patients with congestive heart failure due to systolic dysfunction who are in normal sinus rhythm, digoxin therapy reduces rates of hospitalization, as well as clinical deterioration, defined as worsening New York Heart Association (NYHA) classification or an increase in clinical signs and symptoms (strength of recommendation [SOR]: A, systematic review of randomized controlled trials [RCT]).1 These benefits appear to be more pronounced for men.2
Patients treated with digoxin are at increased risk of developing supraventricular dysrhythmias and second- or third-degree atrioventricular block (SOR: A, large RCT).3 It is unclear if patients with diastolic dysfunction experience similar benefits or harms (SOR: A, systematic review of RCTs).1 Digoxin has not been shown to have any effect on mortality for men with congestive heart failure in sinus rhythm (SOR: A, systematic review of RCTs).1 Digoxin use for women may be associated with an increased risk of mortality2 (SOR: B, extrapolation from RCT).
Evidence summary
A recent Cochrane systematic review summarizes the clinical effects of digoxin when used for patients with heart failure in normal sinus rhythm. Thirteen studies including 7896 participants, most of whom had systolic dysfunction, met the criteria for inclusion. Ninety-four percent of all study participants came from a single large randomized placebo-controlled trial.3 Because the studies did not all measure the same outcomes, subgroup analyses were performed.
Four studies with 1096 participants contributed to the findings on clinical status, 12 studies with 7262 participants contributed to the findings on hospitalization, and 8 studies including 7755 patients contributed to the data on mortality. Patients receiving digoxin experienced reduced rates of hospitalization due to worsening heart failure (odds ratio [OR]=0.68; 95% confidence interval [CI], 0.61–0.75; number needed to treat [NNT]=13–17) and less clinical deterioration (OR=0.31; 95% CI, 0.21–0.43; NNT=3–61). The wide range in NNT for the reduction in clinical deterioration reflects the varying baseline rates of worsening clinical status among patients receiving placebo in the contributing studies. The narrow CI associated with the odds ratio for reduced rates of clinical deterioration reflects the fact that the majority of patients whose clinical status was evaluated as an outcome came from a single large study, the DIG trial.3 This trial followed 6800 patients with NYHA classifications I to III. Ninety-four percent of patients in this trial were also taking angiotensin-converting enzyme (ACE) inhibitors and 82% were taking diuretics. Patients were followed for a mean of 37 months.
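To make the baseline-rate dependence concrete, the sketch below converts a fixed odds ratio into an NNT at several placebo event rates. The odds ratio (0.31) is the one reported above; the baseline rates are purely illustrative and are not taken from the review.

```python
def nnt_from_or(control_event_rate: float, odds_ratio: float) -> float:
    """Convert an odds ratio into an NNT at a given control (placebo) event rate."""
    control_odds = control_event_rate / (1 - control_event_rate)
    treated_odds = control_odds * odds_ratio
    treated_event_rate = treated_odds / (1 + treated_odds)
    absolute_risk_reduction = control_event_rate - treated_event_rate
    return 1 / absolute_risk_reduction

# Hypothetical placebo deterioration rates, chosen only to show the effect.
for baseline in (0.03, 0.10, 0.50):
    print(f"baseline {baseline:.0%}: NNT from OR 0.31 is about {nnt_from_or(baseline, 0.31):.0f}")

# A low baseline rate yields a large NNT and a high baseline rate a small one,
# which is how a single odds ratio can translate into an NNT range as wide as 3-61.
```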
A subgroup analysis of 988 patients with diastolic dysfunction (ejection fraction >45%) in this study3 suggested no clear benefits or harms when digoxin was used in combination with other therapies vs placebo; however, it did show a positive trend toward the combined outcome of reduced hospitalizations and less clinical deterioration (relative risk [RR]=0.82; 95% CI, 0.63–1.07). Increased rates of supraventricular dysrhythmias (RR=2.08; 95% CI, 1.44–2.99; number needed to harm [NNH]=77) and of second- and third-degree heart block (RR=2.93; 95% CI, 1.61–5.34; NNH=125) were demonstrated for patients receiving digoxin. There was no difference in mortality between patients receiving digoxin and those receiving placebo (OR=0.98; 95% CI, 0.89–1.09).1
A post-hoc subgroup analysis focusing only on sex-based differences in the DIG trial suggested women benefit less than men from reduced hospitalizations: –4.2% (95% CI, –8.9 to 0.5) vs –8.9% (95% CI, –11.4 to –6.5) (P=.053).2 When a multivariable analysis was performed, digoxin use for women was associated with a higher risk of mortality (adjusted hazard ratio vs placebo=1.23; 95% CI, 1.02–1.47).2
Two randomized controlled withdrawal studies, in which patients being treated with digoxin had it discontinued, were also included in the systematic review; these patients’ clinical outcomes were then compared with those of patients who continued to receive digoxin for the duration of the trial. Six parallel-design studies, in which patients taking digoxin underwent a washout period before being randomized to either digoxin or placebo, were also included in the evaluation of digoxin’s effect on clinical status. Because the participants in these studies had already demonstrated an ability to tolerate digoxin, the results may have been biased in favor of digoxin.4,5
Recommendations from others
The American College of Cardiology/American Heart Association6 and Heart Failure Society of America7 guidelines both recommend that digoxin be used in NYHA class II–III patients in sinus rhythm who remain symptomatic on standard therapy (described as ACE inhibitors, diuretics, and beta-blockers). Guidelines from the Scottish Intercollegiate Guidelines Network,8 the European Society of Cardiology,9 and the American Medical Directors Association10 offer similar recommendations.
Digoxin unlikely to benefit most patients with mild heart failure
ACE inhibitors, diuretics, and beta-blockers should clearly be the first drugs chosen for patients with CHF. Not only have they been shown to reduce mortality and symptoms, but they also do not carry the significant risks associated with digoxin toxicity.
Digoxin is unlikely to benefit patients with class I heart failure, as their risk of clinical deterioration and hospitalization is low. However, for patients who cannot tolerate any of the first-line drugs, or who remain symptomatic while taking them, carefully dosed and monitored digoxin is a useful adjunct in practice.
While it is true that these patients need periodic laboratory monitoring, by the time they require digoxin therapy, their visits for care are already frequent and they would likely require few, if any, additional visits.
1. Hood WB Jr, Dans AL, Guyatt GH, Jaeschke R, McMurray JJV. Digitalis for treatment of congestive heart failure in patients in sinus rhythm. Cochrane Database Syst Rev 2004;(2):CD002901.
2. Rathore SS, Wang Y, Krumholz HM. Sex-based differences in the effect of digoxin for the treatment of heart failure. N Engl J Med 2002;347:1403-1411.
3. The effect of digoxin on mortality and morbidity in patients with heart failure. The Digitalis Investigation Group. N Engl J Med 1997;336:525-533.
4. Uretsky BF, Young JB, Shahidi FE, Yellen LG, Harrison MC, Jolly MK. Randomized study assessing the effect of digoxin withdrawal in patients with mild to moderate chronic congestive heart failure: results of the PROVED trial. PROVED Investigative Group. J Am Coll Cardiol 1993;22:955-962.
5. Packer M, Gheorghiade M, Young JB, et al. Withdrawal of digoxin from patients with chronic heart failure treated with angiotensin-converting-enzyme inhibitors. RADIANCE Study. N Engl J Med 1993;329:1-7.
6. Hunt SA, Baker DW, Chin MH, et al. ACC/AHA guidelines for the evaluation and management of chronic heart failure in the adult. Bethesda, Md: American College of Cardiology Foundation; 2001.
7. Heart Failure Society of America guidelines for management of patients with heart failure caused by left ventricular systolic dysfunction-pharmacological management. J Card Fail 1999;5:357-382.
8. Diagnosis and treatment of heart failure due to left ventricular systolic dysfunction. Edinburgh: Scottish Intercollegiate Guidelines Network; 1999. Available at: www.sign.ac.uk/guidelines/fulltext/35/index.html.
9. Remme WJ, Swedberg K. Task Force for the Diagnosis and Treatment of Chronic Heart Failure. European Guidelines for the diagnosis and treatment of chronic heart failure. Eur Heart J 2001;22:1527-1560.
10. American Medical Directors Association. Heart failure. Columbia, Md: American Medical Directors Association (AMDA); 2002.
Evidence-based answers from the Family Physicians Inquiries Network
What is the evaluation and treatment strategy for Raynaud’s phenomenon?
Raynaud’s phenomenon is diagnosed by history, which also plays a key role in distinguishing primary from secondary Raynaud’s phenomenon (strength of recommendation [SOR]: C, based on expert opinion). The initial treatment includes conservative measures such as the use of gloves, cold avoidance, and rapid rewarming (SOR: C, based on expert opinion); in refractory cases, the vasodilatory agents nifedipine or prazosin alleviate symptoms (SOR: A for both, based on multiple randomized controlled trials) (TABLE).
TABLE
Primary therapies for Raynaud’s phenomenon
TREATMENT | RECOMMENDATION LEVEL | COMMON ADVERSE EFFECTS |
---|---|---|
Nifedipine | A | Lower extremity edema, flushing, headache, dizziness |
Prazosin | A | Dizziness, hypotension, palpitations |
Conservative | C | — |
Evidence summary
Raynaud’s phenomenon is diagnosed, according to expert opinion, by a history of episodic digital artery vasospasm precipitated by cold temperatures or emotional stress. This presents as well-demarcated digital pallor and cyanosis, often followed by reactive hyperemia occurring 15 to 20 minutes after rewarming.1,2 No reliable office test confirms the diagnosis. By definition, primary Raynaud’s phenomenon occurs in the absence of associated diseases and is considered an exaggerated vasoconstrictive response to cold. It must be distinguished from normal mottling of the digits in response to cold temperatures, the effects of vasoconstrictive medications, environmental injury (frostbite, use of vibrating tools), neuropathy, and thoracic outlet syndrome.1,2 Experts differ on whether laboratory evaluation with an erythrocyte sedimentation rate and an antinuclear antibody test is necessary for patients with primary Raynaud’s phenomenon.1,3
Patients with secondary Raynaud’s phenomenon have an underlying cause or disease, such as scleroderma or systemic lupus erythematosus.2 The finding of distorted capillaries in the nail folds, using an ophthalmoscope set at 40+ diopters of magnification, is the best predictor of an associated connective-tissue disease.4 A cold-water challenge to trigger an attack of Raynaud’s phenomenon produces inconsistent results and is not recommended. Research tools such as thermographic and laser Doppler imaging can measure digital artery blood flow but are rarely used clinically.1,5 Patients with secondary Raynaud’s phenomenon should have a complete blood count, biochemistry profile, and urinalysis; they may need additional tests as determined by the nature of their underlying disease.1
Conservative management is helpful for all patients with Raynaud’s phenomenon, and may be the only treatment needed. Experts advise dressing warmly, wearing gloves when appropriate, using abortive strategies such as placing the hands into warm water, and avoiding sudden cold exposure, emotional stress, and vasoconstrictive agents such as nicotine.6
Medication may be helpful for patients whose symptoms are not controlled with conservative measures. Six randomized, placebo-controlled trials involving 451 people with primary Raynaud’s phenomenon demonstrated that nifedipine decreases the mean frequency of vasospastic attacks. Three of these trials also showed subjective improvement in symptom severity with nifedipine vs placebo.7 A meta-analysis of 6 randomized crossover studies compared nifedipine or nicardipine with placebo in 59 patients with secondary Raynaud’s phenomenon and underlying systemic sclerosis. Nifedipine significantly decreased the frequency and severity of attacks. Nicardipine showed a trend towards reduced symptoms in 1 trial with only 15 patients.8 Another randomized trial comparing sustained-release nifedipine to placebo showed a 66% reduction in the number of attacks in the treatment group at 1 year; 19 of the 77 people in the nifedipine group dropped out of the study, as did 24 of the 81 people in the placebo group.9
A systematic review of 2 randomized controlled trials with a total of 40 patients found prazosin modestly effective in secondary Raynaud’s phenomenon, but it was less well-tolerated than calcium channel blockers.10 Single, small randomized crossover trials showed improved responses to both fluoxetine11 and losartan12 when compared with nifedipine. In general, these medications appear to reduce attacks by 30% to 40%.
Small, prospective studies of low-level laser irradiation, palmar sympathectomy, and endoscopic thoracic sympathectomy show some benefit for patients with digital ulcers.13-15 A randomized controlled trial showed biofeedback was ineffective for voluntary control of digital blood flow for patients with Raynaud’s phenomenon.9
Recommendations from others
The American College of Rheumatology does not offer recommendations for the diagnosis or treatment of Raynaud’s phenomenon. UpToDate recommends treatment with conservative measures; sustained-release nifedipine or amlodipine may be used if these are insufficient. Other vasodilators may be added or substituted in the event of an adverse reaction or a poor response to the calcium-channel blocker.16
Reserve pharmacotherapy for cases that are resistant to conservative measures
Grant Hoekzema, MD
Mercy Family Medicine Residency, St. Louis, Mo
Raynaud’s phenomenon is one of those clinical syndromes that stirs the desire to find an exotic explanation, such as systemic lupus erythematosus, but most often yields less glamorous results. Usually I must tell patients that I can’t explain the cause and recommend that they keep their hands as warm as possible to avoid the symptoms. I have learned to discipline myself over the years to pursue a connective tissue cause only when other signs or symptoms of such a disease are also present. The use of the ophthalmoscope at high power to look for distorted nail-fold capillary loops is a helpful pearl.
I reserve pharmacotherapy for cases that are resistant to conservative measures due to the cost and side effects of the drug options.
1. Bowling JC, Dowd PM. Raynaud’s disease. Lancet 2003;361:2078-2080.
2. Wigley FM. Clinical practice. Raynaud’s phenomenon. N Engl J Med 2002;347:1001-1008.
3. Wigley FM. Clinical manifestations and diagnosis of the Raynaud phenomenon. UpToDate. Available at: www.uptodate.com. Accessed on May 9, 2005.
4. Nagy Z, Czirjak L. Nailfold digital capillaroscopy in 447 patients with connective tissue disease and Raynaud’s disease. J Eur Acad Dermatol Venereol 2004;18:62-68.
5. Clark S, Dunn G, Moore T, Jayson M 4th, King TA, Herrick AL. Comparison of thermography and laser Doppler imaging in the assessment of Raynaud’s phenomenon. Microvasc Res 2003;66:73-76.
6. Brown KM, Middaugh SJ, Haythornthwaite JA, Bielory L. The effects of stress, anxiety, and outdoor temperature on the frequency and severity of Raynaud’s attacks: the Raynaud’s treatment study. J Behav Med 2001;24:137-153.
7. Pope J. Raynaud’s phenomenon (primary). Clin Evid 2004;11:1-2.
8. Thompson AE, Shea B, Welch B, Fenlon D, Pope J. Calcium-channel blockers for Raynaud’s phenomenon in systemic sclerosis. Arthritis Rheum 2001;44:1841-1847.
9. Comparison of sustained-release nifedipine and temperature biofeedback for treatment of primary Raynaud phenomenon. Results from a randomized clinical trial with 1-year follow-up. Arch Intern Med 2000;160:1101-1108.
10. Pope J, Fenlon D, Thompson A, et al. Prazosin for Raynaud’s phenomenon in progressive systemic sclerosis. Cochrane Database Syst Rev 2000;(2):CD000956.
11. Coleiro B, Marshall SE, Denton CP, et al. Treatment of Raynaud’s phenomenon with the selective serotonin reuptake inhibitor fluoxetine. Rheumatology (Oxford) 2001;40:1038-1043.
12. Dziadzio M, Denton CP, Smith R, et al. Losartan therapy for Raynaud’s phenomenon and scleroderma: clinical and biochemical findings in a fifteen-week, randomized, parallel-group, controlled trial. Arthritis Rheum 1999;42:2646-2655.
13. al-Awami M, Schillinger M, Gschwandtner ME, Maca T, Haumer M, Minar E. Low level laser treatment of primary and secondary Raynaud’s phenomenon. VASA 2001;30:281-284.
14. Tomaino MM, Goitz RJ, Medsger TA. Surgery for ischemic pain and Raynaud’s phenomenon in scleroderma: a description of treatment protocol and evaluation of results. Microsurgery 2001;21:75-79.
15. Matsumoto Y, Ueyama T, Endo M, Sasaki H, Kasashima F, Abe Y, et al. Endoscopic thoracic sympathectomy for Raynaud’s phenomenon. J Vasc Surg 2002;36:57-61.
16. Wigley FM. Treatment of the Raynaud phenomenon. UpToDate. Available at: www.uptodate.com. Accessed on May 9, 2005.
Evidence-based answers from the Family Physicians Inquiries Network
What is the best way to identify patients with white-coat hypertension?
Ambulatory blood pressure monitoring is currently the gold standard for detecting patients with white-coat hypertension. Women, as well as patients with lower office systolic blood pressures, stage I hypertension, and no target organ damage, are more likely to have white-coat hypertension (strength of recommendation [SOR]: B, based on prospective cohort studies) (TABLE).
Self- or home blood pressure monitoring has also been used to detect patients with white-coat hypertension. However, it has a low sensitivity (61%–68%) and low positive predictive value (PV+) (33%–48%) (SOR: B, short-term prospective cohort studies).
Evidence summary
White-coat hypertension, also known as isolated office hypertension, refers to elevated blood pressures in a medical setting with normal blood pressures during regular daily life. White-coat hypertension is defined as 1) an office blood pressure of >140 mm Hg systolic or >90 mm Hg diastolic on at least 3 separate office visits, with 2 measurements at each visit, and 2) a mean daytime blood pressure of <135 mm Hg systolic and <85 mm Hg diastolic on ambulatory blood pressure monitoring.1 Other measures of normal blood pressure on ambulatory blood pressure monitoring are <130/80 mm Hg for the full 24-hour period and <120/70 mm Hg for night-time blood pressure.2 A recent Clinical Inquiry summarized 3 cohort trials: 2 showed that patients with white-coat hypertension had a lower risk of cardiovascular events, and 1 showed no difference between patients with white-coat hypertension and patients with sustained hypertension.3 Identifying patients with white-coat hypertension is important to avoid overtreating individuals at lower risk of cardiovascular events.
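As a minimal sketch of the two-part definition just described, the following hypothetical function (the function name and data layout are ours, and averaging the 2 measurements per visit is an added assumption, not part of the cited criteria) encodes the stated thresholds.

```python
from statistics import mean

def is_white_coat_hypertension(office_visits, daytime_ambulatory):
    """office_visits: list of visits, each a list of (systolic, diastolic) readings
    (at least 3 visits, 2 readings per visit);
    daytime_ambulatory: list of (systolic, diastolic) daytime ABPM readings."""
    # Criterion 1: office pressure >140 mm Hg systolic or >90 mm Hg diastolic
    # at each of at least 3 separate visits (readings averaged per visit;
    # the averaging is an assumption for this sketch).
    office_elevated = len(office_visits) >= 3 and all(
        mean(s for s, _ in visit) > 140 or mean(d for _, d in visit) > 90
        for visit in office_visits
    )
    # Criterion 2: mean daytime ambulatory pressure <135/<85 mm Hg.
    ambulatory_normal = (
        mean(s for s, _ in daytime_ambulatory) < 135
        and mean(d for _, d in daytime_ambulatory) < 85
    )
    return office_elevated and ambulatory_normal
```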
Which patients with elevated blood pressure on repeated visits have white-coat hypertension? In studies of patients, most of whom had stage I hypertension (140–159/90–99 mm Hg), anywhere from 10% to 50% have white-coat hypertension (TABLE). In a joint multivariate analysis of 2 cohort studies, which enrolled 1564 subjects with uncomplicated stage I hypertension, white-coat hypertension was associated with lower office systolic blood pressure, female gender, and nonsmoking.2 Similarly, a large international database of 2492 subjects found that women, older subjects, and those with lower and fewer office systolic blood pressure measurements were more likely to have white-coat hypertension.4 In another analysis of 1333 Italian subjects, the prevalence of white-coat hypertension was 33.3% in those with stage I hypertension, 11% in those with stage II, and 3% in those with stage III.5 A study that followed more than 600 men in Finland for 20 years compared those who developed white-coat hypertension with those who developed sustained hypertension. The patients with sustained hypertension had more microalbuminuria, a greater left ventricular mass on echocardiography, increased cholesterol esters, and a greater body-mass index (all P<.05) than patients with white-coat hypertension. Smoking status was similar in both groups, in contrast to other studies.6 A recent study did not find that body-mass index distinguished white-coat hypertension from sustained hypertension.7
TABLE
Patient attributes and white-coat hypertension
ATTRIBUTE | SUBJECTS | COMPARISON | P VALUE |
---|---|---|---|
Gender, % with WCH | 5716* | 17% of females vs 14% of males | <.001 |
% female, WCH vs SH group | 1564† | 45% vs 33% | .002 |
Ratio female:male with WCH | 2634§ | Odds ratio=1.92 (95% CI, 1.45–2.54) | <.001 |
Mean age, WCH vs SH | 1564† | 40 vs 39 years | .52 |
% with WCH in 4 age groups | 5716* | <35 y=12%; 35–50 y=14%; 50–65 y=16%; >65 y=17% | <.001 |
Currently smoking, % with WCH | 5716* | No=16.7%; Yes=11.3% | <.001 |
Currently smoking, % WCH vs SH | 1564† | 7% vs 24% | .04 |
BMI, % with WCH in 3 groups | 5716* | <25=16%; 25–30=15%; >30=15% | NS |
BMI, WCH group vs SH group | 1564† | 25.4 vs 25.9 | NS |
BMI, WCH group vs SH group | 414‡ | 23.9 vs 24.7 | <.05 |
Original clinic SBP, % with WCH | 5716* | 140–159=31.2%; 160–170=18.7%; 171–180=11.8% | <.001 |
Original clinic SBP, % with WCH | 2492§ | 140–150=65%; 151–160=53%; 161–170=33% | .004 |
LV mass (g), WCH vs SH | 1564† | 160 vs 180 | .001 |
LV mass index (g/m²), WCH vs SH | 414‡ | 126 vs 136 | <.01 |

WCH, white-coat hypertension; SBP, systolic blood pressure; SH, sustained hypertension; CI, confidence interval; BMI, body-mass index; NS, not significant; LV, left ventricular.

*Patients referred to a blood pressure unit over a 22-year period.7
†A combination of 2 studies of clinic patients with stage I hypertension (140–159/90–99 mm Hg).2
‡50-year-old men in a community in Finland invited to a health survey with a 20-year follow-up.6
§Data from 24 pooled studies of ambulatory blood pressure monitoring.4
Using home blood pressure as a screening tool is a problem because of its low sensitivity and poor PV+. In the THOP study (247 subjects), which used ambulatory blood pressure monitoring as the reference method, home blood pressure had a high specificity (89%) and high negative predictive value (PV–) (97%) but a lower sensitivity (68%) and low PV+ (33%).8 In other words, if home blood pressure shows hypertension, there is a 97% chance the patient has sustained hypertension, but if home blood pressure returns to normal in patients with office hypertension, two thirds of those patients will still have sustained hypertension. In another study, which enrolled 133 untreated patients from a hypertension clinic with diastolic blood pressures of 90 to 115 mm Hg, ambulatory blood pressure monitoring served as the reference standard; the sensitivity of home blood pressure monitoring in identifying white-coat hypertension was 61% and the PV+ was 48%.9
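The arithmetic behind these figures is standard: predictive values follow from sensitivity, specificity, and the prevalence of white-coat hypertension in the tested group. The sketch below is illustrative only; the sensitivity and specificity are the THOP values quoted above, but the prevalence figures are hypothetical and are not taken from the cited studies.

```python
def predictive_values(sensitivity, specificity, prevalence):
    """Return (PV+, PV-) for a test with the given characteristics, where a
    'positive' result means home readings suggest white-coat hypertension."""
    tp = sensitivity * prevalence              # true positives per unit population
    fp = (1 - specificity) * (1 - prevalence)  # false positives
    fn = (1 - sensitivity) * prevalence        # false negatives
    tn = specificity * (1 - prevalence)        # true negatives
    return tp / (tp + fp), tn / (tn + fn)

# Hypothetical prevalences of white-coat hypertension among the patients tested.
for prevalence in (0.10, 0.25, 0.50):
    pv_pos, pv_neg = predictive_values(sensitivity=0.68, specificity=0.89, prevalence=prevalence)
    print(f"prevalence {prevalence:.0%}: PV+ {pv_pos:.0%}, PV- {pv_neg:.0%}")

# Predictive values depend on prevalence as well as on sensitivity and
# specificity, so the PV+ and PV- quoted above are specific to the mix of
# patients enrolled in each study.
```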
Recommendations from others
The European Society of Hypertension Working Group on Blood Pressure Monitoring recommends that subjects with blood pressures of 140–159/90–99 mm Hg at several visits have ambulatory blood pressure monitoring, because 33% of those people will have white-coat hypertension. Women, nonsmokers, and patients with recently diagnosed hypertension, a limited number of blood pressure determinations, or a small left ventricular mass on echocardiography should also have ambulatory blood pressure monitoring, and there should be a search for metabolic risk factors and target organ damage. Patients who are aware that their blood pressures are lower outside the office should also be considered for ambulatory blood pressure monitoring.10
The latest Joint National Committee report (JNC VII) indicates that ambulatory blood pressure monitoring may be useful to detect white-coat hypertension among patients with hypertension and no target organ damage, and those with episodic hypertension.11
Ambulatory BP monitoring better than home monitoring for ruling out white-coat hypertension
Joseph Saseen, PharmD, FCCP, BCPS
University of Colorado Health Sciences Center, Denver
Landmark placebo-controlled outcome-based trials demonstrating reduced morbidity and mortality with hypertension treatment did not differentiate essential from white-coat hypertension. Patients were included based on elevated office-based blood pressure measurements. Since we now know that the prevalence of white-coat hypertension is high, it should be ruled out before implementing antihypertensive therapy.
Ambulatory blood pressure monitoring is more accurate than home monitoring for ruling out white-coat hypertension. However, the ease, simplicity, and availability of home monitoring make it a more realistic option for routine clinical practice. When home blood pressure monitoring is used, reliable measurement devices (eg, newer automatic or manual home devices) should be chosen, and patients should be instructed in proper use and documentation of blood pressure values to facilitate an appropriate clinical assessment.
1. O’Brien E, Asmar R, Beilin L, et al. on behalf of the European Society of Hypertension Working Group on Blood Pressure Monitoring. European Society of Hypertension recommendations for conventional, ambulatory and home blood pressure measurement. J Hypertens 2003;21:821-848.
2. Verdecchia P, Palatini P, Schillaci G, Mormino P, Porcellati C, Pessina AC. Independent predictors of isolated clinic (‘white-coat’) hypertension. J Hypertens 2001;19:1015-1020.
3. Rao S, Liu C-T, Wilder L. What is the best way to treat patients with white-coat hypertension? J Fam Pract 2004;53:408-412.
4. Staessen JA, O’Brien ET, Atkins N, Amery AK, on behalf of the Ad-Hoc Working Group. Short report: ambulatory blood pressure in normotensive compared with hypertensive subjects. J Hypertens 1993;11:1289-1297.
5. Verdecchia P, Schillaci G, Borgioni C, et al. White-coat hypertension and white-coat effect: Similarities and differences. Am J Hypertens 1995;8:790-798.
6. Bjorklund K, Lind L, Vessby B, Andren B, Lithell H. Different metabolic predictors of white-coat and sustained hypertension over a 20-year follow-up period. Circulation 2002;106:63-68.
7. Dolan E, Stanton A, Atkins N, et al. Determinants of white-coat hypertension. Blood Pressure Monit 2004;9:307-309.
8. Den Hond E, Celis H, Fagard R, et al. Self-measured versus ambulatory blood pressure in the diagnosis of hypertension. J Hypertens 2003;21:717-722.
9. Stergiou GS, Skeva II, Baibas NM, Kalkana CB, Roussias LG, Mountokalakis TD. Diagnosis of hypertension using home or ambulatory blood pressure monitoring: comparison with the conventional strategy based on repeated clinic blood pressure measurements. J Hypertens 2000;18:1745-1751.
10. Verdecchia P, O’Brien E, Pickering T, et al. When can the practicing physician suspect white coat hypertension? Statement from the Working Group on Blood Pressure Monitoring of the European Society of Hypertension. Am J Hypertens 2003;16:87-91.
11. Chobanian AV, Bakris GL, Black HR, et al. Seventh Report of the Joint National Committee on Prevention, Detection, Evaluation, and Treatment of High Blood Pressure. Hypertension 2003;42:1206-1252.
Evidence-based answers from the Family Physicians Inquiries Network
Can we prevent splenic rupture for patients with infectious mononucleosis?
All patients with infectious mononucleosis should be considered at risk for splenic rupture since clinical severity, laboratory results, and physical exam are not reliable predictors of rupture (strength of recommendation [SOR]: B, case-control study). Clinical evidence indicates that most splenic ruptures occur within 4 weeks of symptom onset, which correlates with ultrasound data showing resolution of splenomegaly by 30 days from symptom onset (SOR: B, case-control study). Given the morbidity and mortality associated with splenic rupture, instruct patients to refrain from vigorous physical activity for 1 month after symptom onset (SOR: C, expert opinion).
Evidence summary
The annual incidence of infectious mononucleosis in the US is between 345 and 671 cases per 100,000, and it is highest in the adolescent age group.1 Splenic rupture is the leading cause of death in infectious mononucleosis, occurring in 0.1% to 0.2% of all cases.1-4 Based on this figure, approximately 100 cases of rupture may occur yearly in the US, only a few of which are reported.
A retrospective analysis of 8116 patients with infectious mononucleosis at the Mayo Clinic estimated the risk of spontaneous splenic rupture at 0.1% of cases, consistent with rates found in other studies. The study’s criteria for definite spontaneous rupture were no recent trauma; recent symptoms; and hematologic, serologic, and histologic (splenic) evidence of infectious mononucleosis. Five patients with rupture (average age, 22 years) were identified; 3 were male. Splenectomy was performed for all patients. Follow-up over 33 years found all patients healthy with minimal subsequent illness.3
A review of 55 cases found that almost all splenic ruptures occurred between the fourth and twenty-first days of illness and that all affected spleens were enlarged, although only half were palpable on examination. Ninety percent of the ruptures occurred in males, and more than half were nontraumatic. There was no correlation between severity of illness and susceptibility to splenic rupture. No specifics were given on duration of illness or on how splenomegaly was diagnosed.4
The best technique for identifying splenic enlargement and determining risk of rupture is unclear. In a case-control study, 29 patients were admitted to an ear, nose, and throat department with infectious mononucleosis and were evaluated serially for splenic and hepatic enlargement by ultrasound. Diagnosis was based on clinical picture, a positive heterophile test, and other blood tests. Four patients were included despite negative serology due to compelling clinical presentations and symptoms. Serial ultrasound imaging showed that all had enlarged spleens (mean enlargement 50%–60%); 50% had hepatic enlargement (5%–20% enlargement). The patients were compared with a control group of 8 patients admitted with peritonsillar abscess, as verified by tonsillectomy. No controls had hepatic or splenic enlargement. Physical examinations detected splenomegaly in only 17% of the study patients. The exams were conducted by house staff without blinding, randomization, or tests of reproducibility. Ultrasound scanning was completed on days 1, 3, 5, 10, 20, 90, and 120. The spleen was significantly larger in the infectious mononucleosis group than in the control group for the first 30 days, and no difference in size was found over the subsequent 3 months. No correlation existed between laboratory values and enlargement of the spleen or liver.5
No high-quality studies evaluate the risks of physical activity in infectious mononucleosis. Case reports of rupture have found comparable rates of traumatic and nontraumatic causes. In addition, no clinical trials evaluate the use of imaging in return-to-activity decisions or its effect on splenic rupture. Ultrasound is often used in the athletic setting for these decisions, but no evidence supports its use as routine practice; routine ultrasound for this purpose would cost more than $1 million to prevent 1 traumatic rupture.6
Recommendations from others
Clinical Sports Medicine recommends that athletes refrain from sporting activities until all acute symptoms resolve and that contact sports be avoided while the spleen is enlarged. No recommendation is given on how to determine spleen size.7
The Team Physician’s Handbook recommends that athletes do no cardiovascular work, lifting, strength training, or contact sports for 2 weeks because of the risk of splenic rupture. Activity is then gradually increased as the athlete improves. Athletes are to avoid contact sports or weight-lifting for 4 weeks unless they feel well and ultrasound reveals a normal-sized spleen.8
Sports Medicine Secrets advises that ultrasound or CT of the spleen be obtained if there is any suspicion of splenomegaly or if return to play before 4 weeks is contemplated. Light athletic activity may be resumed approximately 3 weeks after symptom onset if the spleen is not tender or enlarged on examination, the patient is afebrile, liver enzymes are normal, and all other complications have resolved. Contact sports may be resumed 4 weeks after symptom onset if there is no documented splenomegaly, the athlete feels ready, and all other complications have resolved.9
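For readers who find the Sports Medicine Secrets criteria easier to scan as explicit logic, the sketch below (Python) simply restates the return-to-activity conditions quoted above; it is an illustration, not a validated clinical decision tool, and the function and parameter names are invented for this example.

```python
def may_resume_light_activity(weeks_since_onset: float,
                              spleen_tender_or_enlarged: bool,
                              febrile: bool,
                              liver_enzymes_normal: bool,
                              other_complications_resolved: bool) -> bool:
    """Light athletic activity, per the criteria quoted above (about 3 weeks)."""
    return (weeks_since_onset >= 3
            and not spleen_tender_or_enlarged
            and not febrile
            and liver_enzymes_normal
            and other_complications_resolved)


def may_resume_contact_sports(weeks_since_onset: float,
                              splenomegaly_documented: bool,
                              athlete_feels_ready: bool,
                              other_complications_resolved: bool) -> bool:
    """Contact sports, per the criteria quoted above (4 weeks or more)."""
    return (weeks_since_onset >= 4
            and not splenomegaly_documented
            and athlete_feels_ready
            and other_complications_resolved)


# Example: 5 weeks after onset, no documented splenomegaly, the athlete feels
# ready, and no unresolved complications -> contact sports may be considered.
print(may_resume_contact_sports(5, splenomegaly_documented=False,
                                athlete_feels_ready=True,
                                other_complications_resolved=True))  # True
```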
Mononucleosis patients should restrict strenuous activity for 4 weeks from onset
Drew E. Malloy, MD
University of Arizona Campus Health Center, Tucson
At the University of Arizona student health center, we see over 120 new cases of mononucleosis each year. No clinicians in our group can recall a single splenic rupture in 18 years. The quoted rupture rate of 0.1%, based on a study of 8116 patients, may be a high estimate, and it is likely that most physicians go through their entire careers without seeing a single case.
For our group, the value of this review was to point out the lack of correlation with illness severity, lab abnormalities, or a palpable spleen in predicting this rare event. Based on this review, we amended our patient handout to make more specific our advice about restricting strenuous physical activity for 4 weeks from the onset of symptoms.
1. Auwaerter PG. Infectious mononucleosis in middle age. JAMA 1999;281:454-459.
2. Maki DG, Reich RM. Infectious mononucleosis in the athlete. Diagnosis, complications, and management. Am J Sports Med 1982;10:162-173.
3. Farley DR, Zietlow SP, Bannon MP, Farnell MB. Spontaneous rupture of the spleen due to infectious mononucleosis. Mayo Clin Proc 1992;67:846-853.
4. Asgari MM, Begos DG. Spontaneous splenic rupture in infectious mononucleosis: a review. Yale J Biol Med 1997;70:175-182.
5. Dommerby H, Stangerup SE, Stangerup M, Hancke S. Hepatosplenomegaly in infectious mononucleosis assessed by ultrasonic scanning. J Laryngol Otol 1986;100:573-579.
6. Ebell MH. Epstein-Barr virus infectious mononucleosis. Am Fam Physician 2004;70:1279-1287.
7. Brukner P, Khan K. Clinical Sports Medicine. 2nd ed, rev. Australia: McGraw-Hill; 2002.
8. Martin TJ. Infections in athletes. In: Mellion MB, et al, eds. Team Physician’s Handbook. 3rd ed. Philadelphia, Pa: Hanley & Belfus; 2001:226-228.
9. Grindel SH, Shea MA. Infections in athletes. In: Mellion MB, Putukian M, Madden CC, eds. Sports Medicine Secrets. 3rd ed. Philadelphia, Pa: Hanley & Belfus; 2003:207.
Evidence-based answers from the Family Physicians Inquiries Network
What is the best strategy for impaired glucose tolerance in nonpregnant adults?
The best treatment strategy for impaired glucose tolerance (IGT) and impaired fasting glucose (IFG) is lifestyle intervention with a structured weight loss program of diet and exercise (strength of recommendation [SOR]: B, based on high-quality randomized controlled trials [RCTs] for disease-oriented outcomes). Patients with IGT and IFG should be counseled to lose 5% to 7% of their body weight and instructed on moderate intensity physical activity for ~150 minutes per week.
Metformin (Glucophage), acarbose (Precose), and troglitazone (Rezulin) are also effective, but lifestyle interventions appear superior ( TABLE ) (SOR: B, based on individual high-quality randomized controlled trials). The American Diabetes Association defines IFG as a fasting glucose between 100 and 125 mg/dL, and IGT as a glucose between 140 and 199 mg/dL measured 2 hours after an oral glucose challenge.
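Because the two sets of cut points are easy to confuse, here is a minimal sketch (Python, illustrative only) that maps measurements onto categories. The <100 mg/dL normal fasting value and the ≥126 mg/dL and ≥200 mg/dL provisional diabetes thresholds are standard ADA cut points assumed here rather than spelled out above.

```python
def classify_fasting_glucose(fpg_mg_dl: float) -> str:
    """Categorize a fasting plasma glucose (mg/dL) using ADA cut points."""
    if fpg_mg_dl < 100:
        return "normal fasting glucose"
    if fpg_mg_dl <= 125:
        return "impaired fasting glucose (IFG)"
    return "provisional diabetes (confirm with repeat testing)"


def classify_2h_ogtt(glucose_mg_dl: float) -> str:
    """Categorize a 2-hour post-load glucose (mg/dL) from an oral glucose tolerance test."""
    if glucose_mg_dl < 140:
        return "normal glucose tolerance"
    if glucose_mg_dl <= 199:
        return "impaired glucose tolerance (IGT)"
    return "provisional diabetes (confirm with repeat testing)"


# Example: a hypothetical patient with an FPG of 112 mg/dL and a 2-hour value
# of 165 mg/dL meets both prediabetic definitions and would be a candidate for
# the lifestyle intervention described above.
print(classify_fasting_glucose(112))  # impaired fasting glucose (IFG)
print(classify_2h_ogtt(165))          # impaired glucose tolerance (IGT)
```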
Adults with IGT or IFG should have laboratory screening for diabetes every 1 to 2 years (SOR: C, based on expert opinion), using the fasting plasma glucose (FPG) as the screening test (SOR: C, based on expert opinion). For individuals whose FPG remains below 126 mg/dL but in whom diabetes is still suspected, oral glucose tolerance testing is considered superior to glycohemoglobin testing for ruling out progression to diabetes (SOR: C, based on expert opinion).
Evidence summary
Both IGT and IFG are associated with a significant risk of developing diabetes and its associated cardiovascular comorbidities; thus, the primary goal for treatment is to prevent or delay the onset of diabetes. Recent well-designed studies have demonstrated benefits of lifestyle interventions for patients with IGT.
In the US Diabetes Prevention Program (DPP), 3234 patients with IGT and a body-mass index (BMI) of at least 24 kg/m2 were randomly assigned to one of the following groups: placebo, metformin, or intensive lifestyle modification. After an average follow-up of 2.8 years, there was a 14% absolute risk reduction in the progression to diabetes in the lifestyle intervention group compared with placebo (number needed to treat [NNT]=7).1 In the Finnish Diabetes Prevention Study, the lifestyle intervention group had a 12.5% absolute risk reduction compared with the control group (NNT=8).2 Successful lifestyle interventions in these studies included weight loss of 5% to 7%, decreased fat intake, increased fiber intake, and 150 minutes of exercise per week.1-2
Drug therapy with metformin, acarbose, and troglitazone has also been successful in preventing or delaying diabetes in people with IGT.1,3,4 In the placebo-controlled DPP trial, metformin use was associated with a reduction in progression to diabetes mellitus (NNT=14).1 In the STOP-NIDDM trial of 1429 persons followed for 3.3 years, acarbose 100 mg 3 times daily resulted in a 9% absolute reduction in progression to diabetes compared with placebo (NNT=11).3
In the TRIPOD study, troglitazone use was associated with a 17% absolute risk reduction in the incidence of diabetes in high-risk Hispanic women (NNT=6 over an average of 30 months).4 The preventive effect of the drug was maintained more than 8 months after troglitazone therapy was discontinued (due to withdrawal from the US market). Current trials with other thiazolidinediones are underway.
TABLE
Comparison of major lifestyle and pharmacologic trials in IGT and IFG
INTERVENTION | RELATIVE RISK REDUCTION IN INCIDENCE OF DIABETES MELLITUS (95% CI) | NUMBER NEEDED TO TREAT | ABSOLUTE RISK REDUCTION |
---|---|---|---|
Lifestyle1 | 58% (48%–66%) | 7 | 14% |
Lifestyle2 | 58% (hazard ratio=0.4; 95% CI, 0.3–0.7) | 8 | 12.5% |
Metformin 850 mg twice daily (Glucophage)1 | 31% (17%–43%) | 14 | 7% |
Acarbose 100 mg three times daily (Precose)3 | 25% (10%–37%) | 11 | 9% |
Troglitazone 400 mg daily4 (Rezulin [withdrawn]) | 56% (17%–75%) | 6 | 16.7% |
CI, confidence interval; NNT, number needed to treat; ARR, absolute risk reduction. |||
Adapted from Davies et al, Diabetic Medicine 2004.9 |
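As a quick arithmetic check on the TABLE, the number needed to treat is the reciprocal of the absolute risk reduction. The sketch below (Python, illustrative only, using the absolute risk reductions quoted above) reproduces the NNT column.

```python
# Absolute risk reductions quoted above, expressed as fractions; NNT = 1 / ARR.
absolute_risk_reductions = {
    "Lifestyle (DPP)": 0.14,
    "Lifestyle (Finnish Diabetes Prevention Study)": 0.125,
    "Metformin (DPP)": 0.07,
    "Acarbose (STOP-NIDDM)": 0.09,
    "Troglitazone (TRIPOD)": 0.167,
}

for intervention, arr in absolute_risk_reductions.items():
    nnt = 1 / arr
    print(f"{intervention}: ARR {arr:.1%} -> NNT ~{round(nnt)}")
# Rounded NNTs of 7, 8, 14, 11, and 6 match the TABLE.
```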
Recommendations from others
The American Diabetes Association (ADA) recommends counseling people with IGT about weight loss and increased physical activity.5 The United States Preventive Services Task Force recommends intensive programs of lifestyle modification (diet, exercise, and behavior) for patients who have prediabetes.6
The ADA recommends regular monitoring (every 1 to 2 years) for the development of diabetes in people with prediabetes, and prefers the FPG for screening because it is faster, more cost-effective, and more reproducible than the more sensitive 2-hour oral glucose tolerance test.5,7 The ADA also recommends that if the FPG is <126 mg/dL and there is a high suspicion for diabetes, a 2-hour oral glucose tolerance test should be performed.
Glycosylated hemoglobin (HbA1C) is not recommended as a screening tool because individuals with IFG or IGT may have normal or near-normal HbA1C levels; these individuals often manifest hyperglycemia only when challenged with the oral glucose load used in the standardized oral glucose tolerance test.8
Lifestyle modification clearly works; medication may have a role as well
James Meza, MD, MSA
Saeed Tarokh, MD
Wayne State University, Detroit, Mich
While lifestyle interventions are clearly efficacious, clinicians will need appropriate resources to help patients exercise and maintain weight loss if they are to achieve similar results. This Clinical Inquiry helps practitioners realize that diabetes mellitus, impaired fasting glucose, impaired glucose tolerance, and obesity probably constitute a spectrum disorder and that we should treat all of these patients more aggressively. This is particularly true considering the epidemic proportions of obesity in the United States. Physicians’ attitudes toward obese patients might be a barrier to effective care. It is important for clinicians to realize that monitoring hemoglobin A1c levels is not recommended for IGT and IFG. Putting evidence into practice will mean that physicians need to be aware of the efficacy of both lifestyle and medical interventions in IGT and IFG.
1. Knowler WC, Barrett-Connor E, Fowler SE, et al. Reduction in the incidence of type 2 diabetes with lifestyle intervention or metformin. N Engl J Med 2002;346:393-403.
2. Tuomilehto J, Lindstrom J, Eriksson JG, et al. Prevention of type 2 diabetes mellitus by changes in lifestyle among subjects with impaired glucose tolerance. N Engl J Med 2001;344:1343-1350.
3. Chiasson JL, Josse RG, Gomis R, Hanefeld M, Karasik A, Laakso M. Acarbose for prevention of type 2 diabetes mellitus: the STOP-NIDDM randomised trial. Lancet 2002;359:2072-2077.
4. Buchanan TA, Xiang AH, Peters RK, et al. Preservation of pancreatic beta-cell function and prevention of type 2 diabetes by pharmacological treatment of insulin resistance in high-risk hispanic women. Diabetes 2002;51:2796-2803.
5. American Diabetes Association and National Institute of Diabetes and Digestive and Kidney Diseases. The prevention or delay of type 2 diabetes. Diabetes Care 2002;25:742-749.
6. US Preventive Services Task Force. Screening for type 2 diabetes mellitus in adults: recommendations and rationale. Ann Intern Med 2003;138:212-214.
7. American Diabetes Association. Screening for type 2 diabetes. Diabetes Care 2004;27 Suppl 1:S11-S14.
8. American Diabetes Association. Diagnosis and classification of diabetes mellitus. Diabetes Care 2004;27 Suppl 1:S5-S10.
9. Davies MJ, Tringham JR, Troughton J, Khunti KK. Prevention of type 2 diabetes mellitus. A review of the evidence and its application in a UK setting. Diabetic Medicine 2004;403-414.
The best treatment strategy for impaired glucose tolerance (IGT) and impaired fasting glucose (IFG) is lifestyle intervention with a structured weight loss program of diet and exercise (strength of recommendation [SOR]: B, based on high-quality randomized controlled trials [RCTs] for disease-oriented outcomes). Patients with IGT and IFG should be counseled to lose 5% to 7% of their body weight and instructed on moderate intensity physical activity for ~150 minutes per week.
Metformin (Glucophage), acarbose (Precose), and troglitazone (Rezulin) are also effective, but lifestyle interventions appear superior ( TABLE ) (SOR: B, based on single high quality randomized controlled trials). The American Diabetes Association defines IFG as a fasting glucose of between 100 and 125 mg/dL, and IGT as glucose between 140 and 199 mg/dL after a 2-hour oral glucose challenge.
Adults with IGT or IFG should have laboratory screening for diabetes every 1 to 2 years (SOR: C, based on expert opinion), using the fasting plasma glucose (FPG) as a screening test (SOR: C, based on expert opinion). For individuals whose FPG exceeds 125 mg/dL, oral glucose tolerance testing is considered superior to glycohemoglobin testing for ruling out progression to diabetes (SOR: C, based on expert opinion).
Evidence Summary
Both IGT and IFG are associated with a significant risk of developing diabetes and its associated cardiovascular comorbidities; thus, the primary goal for treatment is to prevent or delay the onset of diabetes. Recent well-designed studies have demonstrated benefits of lifestyle interventions for patients with IGT.
In the US Diabetes Prevention Program (DPP), 3234 patients with IGT and a body-mass index (BMI) of at least 24 kg/m2 were randomly assigned to one of the following groups: placebo, metformin, or intensive lifestyle modification. After an average follow-up of 2.8 years, there was a 14% absolute risk reduction in the progression to diabetes in the lifestyle intervention group compared with placebo (number needed to treat [NNT]=7).1 In the Finnish Diabetes Prevention Study, the lifestyle intervention group had a 12.5% absolute risk reduction compared with the control group (NNT=8).2 Successful lifestyle interventions in these studies included weight loss of 5% to 7%, decreased fat intake, increased fiber intake, and 150 minutes of exercise per week.1-2
Drug therapy with metformin, acarbose, and troglitazone has also been successful in preventing or delaying diabetes in people with IGT.1,3,4 In the placebo-controlled DPP trial, metformin use was associated with a reduction in progression to diabetes mellitus (NNT=14).1 In the STOP-NIDDM trial of 1429 persons over 3.3 years of follow-up, acarbose 100 mg 3 times daily resulted in a 9% reduction of progression to diabetes, compared with placebo (NNT=11).3
In the TRIPOD study, troglitazone use was associated with a 17% absolute risk reduction in the incidence of diabetes in high-risk Hispanic women (NNT=6 over an average of 30 months).4 The preventive effect of the drug was maintained more than 8 months after troglitazone therapy was discontinued (due to withdrawal from the US market). Current trials with other thiazolidinediones are underway.
TABLE
Comparison of major lifestyle and pharmacologic trials in IGT and IFG
INTERVENTION | RELATIVE RISK REDUCTION IN INCIDENCE OF DIABETES MELLITIS (95% CI) | NUMBER NEEDED TO TREAT | ABSOLUTE RISK REDUCTION |
---|---|---|---|
Lifestyle1 | 58% (48%–66%) | 7 | 14% |
Lifestyle2 | 58% (hazard ratio 0.4; 95% CI, 0.3%–0.7%) | 8 | 12.5% |
Metformin 850 mg twice daily (Glucophage)1 | 31% (17%–43%) | 14 | 7% |
Acarbose 100 mg three times daily (Precose)3 | 25% (10%–37%) | 11 | 9% |
Troglitazone 400 mg daily4 (Rezulin [withdrawn]) | 56% (17%–75%) | 6 | 16.7% |
NNT, number needed to treat; ARR, absolute risk reduction. | |||
Adapted from Davies et al, Diabetic Medicine 2004.9 |
Recommendations from others
The American Diabetes Association (ADA) recommends counseling on weight loss and instructing on increased physical activity in people with IGT.5 The United States Preventive Services Task Force recommends intensive programs of lifestyle modification (diet, exercise, and behavior) for patients who have pre-diabetes.6
The ADA recommends regular monitoring (every 1 to 2 years) for the development of diabetes in people with prediabetes, and prefers FPG to screen for diabetes since it is faster, cost-effective, and more reproducible than the more sensitive 2-hour oral glucose tolerance test.5,7 The ADA also recommends that if the FPG is <126 mg/dL and there is a high suspicion for diabetes, a 2-hour oral glucose tolerance test should be performed.
Glycosylated hemoglobin (HbA1C) is not recommended as a screening tool, because individuals with IFG or IGT may have normal or near-normal HbA1C levels; these individuals often manifest hyperglycemia only when challenged with the oral glucose load use in the standardized oral glucose tolerance test.8
Lifestyle modification clearly works; medication may have a role as well
James Meza, MD, MSA
Saeed Tarokh, MD
Wayne State University, Detroit, Mich
While lifestyle interventions are clearly efficacious, clinicians will need appropriate resources to help patients exercise and maintain weight loss if they are to achieve similar results. This Clinical Inquiry helps practitioners realize that diabetes mellitus, impaired fasting glucose, impaired glucose tolerance, and obesity probably constitute a spectrum disorder and that we should treat all of these patients more aggressively. This is particularly true considering the epidemic proportion of obesity in the United States. Physicians’ attitudes towards obese patients might be a barrier to effective care. It is important for clinicians to realize that monitoring hemoglobin A1c levels is not recommended for IGT and IFG. Putting evidence into practice will mean that physicians need to be aware of the efficacy of both lifestyle and medical interventions in IGT and IFG.
1. Knowler WC, Barrett-Connor E, Fowler SE, et al. Reduction in the incidence of type 2 diabetes with lifestyle intervention or metformin. N Engl J Med 2002;346:393-403.
2. Tuomilehto J, Lindstrom J, Eriksson JG, et al. Prevention of type 2 diabetes mellitus by changes in lifestyle among subjects with impaired glucose tolerance. N Engl J Med 2001;344:1343-1350.
3. Chiasson JL, Josse RG, Gomis R, Hanefeld M, Karasik A, Laakso M. Acarbose for prevention of type 2 diabetes mellitus: the STOP-NIDDM randomised trial. Lancet 2002;359:2072-2077.
4. Buchanan TA, Xiang AH, Peters RK, et al. Preservation of pancreatic beta-cell function and prevention of type 2 diabetes by pharmacological treatment of insulin resistance in high-risk hispanic women. Diabetes 2002;51:2796-2803.
5. American Diabetes Association and National Institute of Diabetes and Digestive and Kidney Diseases. The prevention or delay of type 2 diabetes. Diabetes Care 2002;25:742-749.
6. US Preventive Services Task Force. Screening for type 2 diabetes mellitus in adults: recommendations and rationale. Ann Intern Med 2003;138:212-214.
7. American Diabetes Association. Screening for type 2 diabetes. Diabetes Care 2004;27 Suppl 1:S11-S14.
8. American Diabetes Association. Diagnosis and classification of diabetes mellitus. Diabetes Care 2004;27 Suppl 1:S5-S10.
9. Davies MJ, Tringham JR, Troughton J, Khunti KK. Prevention of type 2 diabetes mellitus. A review of the evidence and its application in a UK setting. Diabetic Medicine 2004;403-414.
Evidence-based answers from the Family Physicians Inquiries Network
Which imaging modality is best for suspected stroke?
Patients exhibiting stroke symptoms should have brain imaging immediately, within 3 hours of symptom onset (strength of recommendation [SOR]: A, based on systematic review). In the first 3 hours after a suspected cerebrovascular accident (CVA), noncontrast head computed tomography (CT) is the gold standard for diagnosis of acute hemorrhagic stroke (SOR: C, based on expert panel consensus). However, the sensitivity of CT for hemorrhage declines steeply 8 to 10 days after the event. Eligibility guidelines for acute thrombolytic therapy are currently based on use of CT to rule out acute hemorrhagic stroke.
Magnetic resonance imaging (MRI) may be equally accurate in diagnosing an acute hemorrhagic stroke if completed within 90 minutes of presentation for patients whose symptoms began less than 6 hours earlier (SOR: B, based on a single high-quality cohort study). MRI is more sensitive than CT for ischemic stroke in the first 24 hours of symptoms (SOR: B, based on systematic review of low-quality studies with consistent findings) and is more sensitive than CT in the diagnosis of hemorrhagic or ischemic stroke more than 1 week after symptom onset (SOR: B, based on 1 high-quality prospective cohort study).
Evidence summary
The British National Health Service Health Technology Assessment (HTA) Programme published a systematic review of optimal brain imaging strategies for the diagnosis of stroke in July 2004.2 The HTA searched Medline and EMBASE from 1980 to 1999 and found 1903 studies relevant to diagnostic imaging for stroke. Only 25 studies reported the type of stroke diagnosed and the imaging reference standard. Thirteen of these 25 studies described the time interval from symptom onset to imaging.
The HTA found a wide range of sensitivities for CT and MRI for both hemorrhagic and ischemic strokes at different time periods (TABLE) and noted that the quality of most of these studies was poor. Most of the studies identified were performed in academic stroke centers and had small sample sizes. Interpretation was masked in only 58% of studies. Few data were available on interobserver reliability, and neuroradiologists usually interpreted images. Two studies, totaling 165 patients, compared CT and MRI scans performed on the same day. However, the most “acute” time period reported was within 48 hours from symptom onset; neither study reported the order in which scans were performed, and only 1 study masked the neuroradiologist to the interpretation of the other modality.
After this systematic review, the HTA performed a prospective cohort study comparing CT with MRI obtained in random order on the day of presentation. They enrolled 228 patients presenting to a general hospital with stroke symptoms (1) lasting longer than 1 day but causing little or no decrease in function, or (2) lasting longer than 5 days. The mean time from onset of symptoms to scanning was 21.5 days. Among patients found to have hemorrhagic stroke on MRI, which was considered the criterion standard for chronic stroke diagnosis, CT detected the hemorrhagic stroke in 50% and late hemorrhagic transformation in 20%. The earliest hemorrhagic stroke missed by CT was 11 days old, and the latest hemorrhage correctly identified by CT was 14 days old.
An additional prospective cohort study comparing imaging modalities in the acute time frame has been published since the HTA review.2 The study enrolled 129 patients with stroke symptoms of less than 3 hours’ duration, as well as 71 patients with between 3 and 6 hours of symptoms. Patients underwent multimodal MRI (including gradient recalled echo and diffusion-weighted imaging) and noncontrast CT within 90 minutes of presentation. Two stroke specialists and 2 neuroradiologists, masked to clinical information, read the scans independently at a later date. Interrater reliability was good (κ=0.75–0.94) for identifying acute hemorrhage. There was 96% concordance between the MRI and CT interpretations. The 4 hemorrhages “missed” by CT were hemorrhagic transformations of acute infarct, and the 4 hemorrhages “missed” by MRI were misclassified as chronic when they were acute.
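The interrater reliability in this study was summarized with Cohen's kappa, which corrects raw agreement for the agreement expected by chance. The sketch below shows the calculation on a 2×2 table of two readers' hemorrhage calls; the counts are invented for illustration, since the study reports only the resulting kappa range.

```python
# Cohen's kappa for two readers making a binary call (hemorrhage: yes/no).
# The counts are hypothetical; the study reports only kappa = 0.75-0.94.
def cohens_kappa(both_yes, a_yes_b_no, a_no_b_yes, both_no):
    n = both_yes + a_yes_b_no + a_no_b_yes + both_no
    observed = (both_yes + both_no) / n              # raw agreement
    p_a = (both_yes + a_yes_b_no) / n                # reader A "yes" rate
    p_b = (both_yes + a_no_b_yes) / n                # reader B "yes" rate
    expected = p_a * p_b + (1 - p_a) * (1 - p_b)     # chance agreement
    return (observed - expected) / (1 - expected)

print(round(cohens_kappa(25, 3, 2, 170), 2))  # 0.89 with these invented counts
```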
The clinical implications of this study are uncertain. Without recognition of 1 imaging modality as the reference standard, it is difficult to say whether a hemorrhage “missed” on CT was a false negative on CT or a false positive on MRI.
TABLE
Range of sensitivities of CT and MRI for hemorrhagic and ischemic stroke
NEUROIMAGING METHOD | TYPE OF STROKE | TIME SINCE SYMPTOM ONSET | |||
---|---|---|---|---|---|
<3 HOURS | <6 HOURS | <48 HOURS | >48 HOURS | |
Head CT without contrast | Sensitivity for hemorrhagic stroke | Evidence limited, assumed 100% in many studies | 86%–90%*3 | 93% | 17%–58% |
Sensitivity for ischemic stroke | 64%–85% | 47%–80% | 23%–81% | 53%–74% | |
Brain MRI | Sensitivity for hemorrhagic stroke | No MRI studies identified for this time frame | 86%–90%*3 | 46% | 38%–97% |
Sensitivity for ischemic stroke | 65% | 84%–88% | 94%–98% | ||
*Sensitivity=86% when other imaging modality used as gold standard, 90% when discharge diagnosis of acute hemorrhagic stroke used as gold standard. | |||||
Source: Wardlaw et al, Health Technol Assess 20042; Kidwell et al, JAMA 2004.3 |
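The sensitivities in the table are each computed against whichever reference standard the underlying study adopted (another imaging modality or the discharge diagnosis, as the footnote notes), which is why the same disagreement between scans can be scored as a CT false negative or an MRI false positive. A minimal sketch with invented counts:

```python
# Sensitivity = true positives / (true positives + false negatives),
# always relative to a chosen reference standard. Counts are invented.
def sensitivity(true_pos, false_neg):
    return true_pos / (true_pos + false_neg)

# Suppose MRI is the reference and calls 25 hemorrhages, 21 of which CT also finds:
print(f"CT sensitivity vs MRI reference: {sensitivity(21, 4):.0%}")  # 84%
# If CT were the reference instead, those 4 discordant scans would count as
# MRI false positives, and CT would not be charged with any misses.
```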
Recommendations from others
In 2003, the Stroke Council, appointed by the American Heart Association, stated: “For most cases and at most institutions, CT remains the most important brain imaging test. A physician skilled in assessing CT studies should be available to interpret the scan (strength of recommendation grade B).” The Stroke Council further recommends that “[i]n patients seen within 6 hours of onset, CT currently may be preferred as the first imaging study because MRI detection of acute intracerebral hemorrhage has not been fully validated (strength of recommendation grade A).”4
The British National Health Service HTA Programme advises clinicians to “scan all immediately” when diagnosing new neurological deficits, with the understanding that CT is the most available and cost-effective modality.2
CT without contrast still the best choice for assessing suspected acute stroke
Fred Grover, Jr, MD
Department of Family Medicine, University of Colorado
CT without contrast remains the best choice when assessing a patient for suspected stroke. For patients who are candidates for rtPA, this should be performed and read within 45 minutes of entering the emergency department. Remember that IV thrombolytics must be administered within 3 hours of stroke onset to be effective.
As xenon-enhanced CT (XeCT) and single-photon emission CT (SPECT) become more available, they may be considered adjuncts to help risk-stratify patients prior to revascularization with a thrombolytic.1 After 48 hours, MRI shows greater sensitivity in detecting both hemorrhagic and ischemic strokes.
1. Latchaw RE, Yonas H, Hunter GJ, et al. Guidelines and recommendations for perfusion imaging in cerebral ischemia: A scientific statement for healthcare professionals by the writing group on perfusion imaging, from the Council on Cardiovascular Radiology of the American Heart Association. Stroke 2003;34:1084-1104.
2. Wardlaw JM, Keir SL, Seymour J, et al. What is the best imaging strategy for acute stroke?. Health Technol Assess 2004;8:1-180.
3. Kidwell CS, Chalela JA, Saver JL, et al. Comparison of MRI and CT for detection of acute intracerebral hemorrhage. JAMA 2004;292:1823-1830.
4. Adams HP, Adams RJ, Brott T, et al. Guidelines for the early management of patients with ischemic stroke. Stroke 2003;34:1056-1083.
Evidence-based answers from the Family Physicians Inquiries Network
Do beta-blockers worsen respiratory status for patients with COPD?
Patients with chronic obstructive pulmonary disease (COPD) who use cardioselective beta-blockers (beta1-blockers) do not experience a significant worsening of their short-term pulmonary status as measured by changes in forced expiratory volume in 1 second (FEV1) or by changes in patients’ self-reported symptoms. If such harmful effects do exist, they are likely to be less clinically important than the substantial proven benefits of beta-blockade for patients with concomitant cardiovascular disease (strength of recommendation [SOR]: A, based on a high-quality meta-analysis of controlled trials).
Limited evidence suggests that most patients with congestive heart failure and COPD without reversible airflow obstruction tolerate carvedilol, which causes both nonselective beta- and alpha-adrenergic blockade (SOR: B, based on limited-quality cohort studies).
Evidence summary
In recent years, beta-blockers have been shown to substantially decrease mortality in patients with congestive heart failure, coronary heart disease, and hypertension. Patients with both cardiovascular disease and COPD, however, are much less likely to receive beta-blocker therapy than comparable patients without COPD. Clinicians may hesitate to use beta-blockers in these patients because beta-blockade can, in principle, worsen respiratory function through bronchoconstriction.1
A 2004 meta-analysis synthesized the data of 19 controlled clinical trials that compared active therapy with either placebo or prior-to-treatment controls, assessing differences in FEV1, response to a beta2-agonist, and patient-reported respiratory symptoms.2 Trials included in the meta-analysis used cardioselective beta-blockers and evaluated either single-dose treatment or therapy of longer duration (2 days to 3.3 months). The authors concluded that patients with COPD who received cardioselective beta-blockers (such as metoprolol, atenolol, or bisoprolol) did not experience a statistically significant short-term deterioration in FEV1, worsening of COPD symptoms, or decreased responsiveness to beta2-agonists. The authors reported similar results for an analysis restricted to only those patients with severe COPD.
This meta-analysis was limited by the relatively small number of participants (N=141 in single-dose treatment studies; N=126 in studies of longer duration treatment) in the handful of eligible studies. Consequently, rare or minimally harmful effects could have gone undetected.
A retrospective analysis of a cohort study examined the tolerability of carvedilol, a nonselective beta- and alpha-adrenergic blocker, in patients with COPD who had been taking the medication for at least 3 months. Eighty-five percent of the 89 patients with COPD tolerated carvedilol. The authors of the study (which was funded by the manufacturer of carvedilol) did not state why the other 15% of patients did not tolerate carvedilol, nor did they mention whether the patients with COPD had reversible airflow obstruction.3
One of the sites that participated in this study subsequently published a smaller retrospective analysis of a cohort study that examined the outcomes of 31 patients with heart failure and COPD without reversible airflow obstruction who were started on carvedilol therapy. Over the 2.4 years that the patients were followed, 1 patient stopped taking carvedilol (mean dose 29 ± 19 mg daily) due to wheezing.4 Whether these 31 patients were also included in the larger study is unclear.
A 2004 narrative review article cited these 2 studies and concluded that carvedilol was well-tolerated in patients with COPD without reversible airflow obstruction, but no evidence exists regarding its tolerability in patients with reversible airflow obstruction.5
Recommendations from others
A 2002 evidence-based clinical guideline on the diagnosis and management of COPD reported that the use of cardioselective beta-blockers in patients with COPD did not significantly worsen respiratory status, citing a previous version of the meta-analysis reviewed above as its source of evidence.6 The American College of Cardiology and the American Heart Association recommended the cautious administration of low-dose, short-acting cardioselective beta-blockers for acute coronary syndrome in patients with COPD.7
A recent consensus workshop summary report issued by experts convened by the National Heart, Lung, and Blood Institute, cited continuing uncertainty regarding the use of beta-blockers for COPD patients with heart disease, and called for additional studies of management strategies for these often-coexisting conditions.8
Benefits outweigh risks for beta-blockade for patients with CV disease, comorbid COPD
The benefits appear to outweigh the risks of cardioselective beta-blocker therapy in patients with cardiovascular disease and comorbid COPD. Prudent management dictates that therapy be initiated with a low-dose cardioselective beta-blocker, that the respiratory status of these patients be monitored closely, and that any otherwise unexplained decline in respiratory status prompt a reevaluation of the appropriateness of beta-blocker therapy.
1. Andrus MR, Holloway KP, Clark DB. Use of beta-blockers in patients with COPD. Ann Pharmacother 2004;38:142-145.
2. Salpeter SS, Ormiston T, Salpeter E, Poole P, Cates C. Cardioselective beta-blockers for chronic obstructive pulmonary disease (Cochrane Review). Cochrane Database Syst Rev 2005;(1).
3. Krum H, Ninio D, Macdonald P. Baseline predictors of tolerability to carvedilol in patients with chronic heart failure. Heart 2000;84:615-619.
4. Kotlyar E, Keogh AM, Macdonald PS, Arnold RH, McCaffrey DJ, Glanville AR. Tolerability of carvedilol in patients with heart failure and concomitant chronic obstructive pulmonary disease or asthma. J Heart Lung Transplant 2002;21:1290-1295.
5. Sirak TE, Jelic S, Le Jemtel TH. Therapeutic update: non-selective beta- and alpha-adrenergic blockage in patients with coexisting chronic obstructive pulmonary disease and chronic heart failure. J Am Coll Cardiol 2004;44:497-502.
6. Finnish Medical Society Duodecim. Chronic Obstructive Pulmonary Disease (COPD). Helsinki, Finland: Duodecim Medical; 2002.
7. Braunwald E, Antman EM, Beasley JW, et al. ACC/AHA 2002 guideline update for the management of patients with unstable angina and non-ST-segment elevation myocardial infarction: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines (Committee on the Management of Patients With Unstable Angina). 2002. Available at: www.acc.org/clinical/guidelines/unstable/unstable.pdf.
8. Croxton TL, Weinmann GG, Senior RM, Wise RA, Crapo JD, Buist AS. Clinical research in chronic obstructive pulmonary disease: needs and opportunities. Am J Respir Crit Care Med 2003;167:1142-1149.
Evidence-based answers from the Family Physicians Inquiries Network
What is angular cheilitis and how is it treated?
Cheilitis is a broad term that describes inflammation of the lip surface characterized by dry scaling and fissuring. Specific types are atopic, angular, granulomatous, and actinic. Angular cheilitis is commonly seen in primary care settings, and it specifically refers to cheilitis that radiates from the commissures, or corners, of the mouth. Other terms synonymous with angular cheilitis are perlèche, commissural cheilitis, and angular stomatitis. Evidence shows that topical preparations of nystatin or amphotericin B (ointments or lozenges) effectively treat angular cheilitis (strength of recommendation [SOR]: A, 2 small placebo-controlled studies).
Improving oral health through regular use of xylitol or xylitol/chlorhexidine acetate containing chewing gums decreases angular cheilitis in nursing home patients (SOR: B, 1 cluster randomized, placebo-controlled trial).
Evidence summary
There is some evidence that antifungals effectively treat angular cheilitis. A prospective, double-blind, placebo-controlled study of 8 patients compared the efficacy of nystatin ointment with placebo ointment. The patients had been referred to a department of oral diagnosis for sore lips, with Candida albicans detected in lesions located bilaterally.1 Each patient was instructed to use one ointment on the right side and the other on the left side; contamination was prevented by the use of gloves changed between applications. All 8 patients demonstrated complete healing after 1 to 4 weeks of treatment with nystatin, whereas only 1 patient had complete healing with placebo, giving a number needed to treat (NNT) of 1.14 (P<.001).
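The NNT reported here follows directly from the healing proportions in the two treatment conditions. A brief sketch of that arithmetic:

```python
# NNT from the split-face nystatin study: 8 of 8 sides healed with nystatin
# ointment vs 1 of 8 with placebo ointment.
healed_nystatin = 8 / 8
healed_placebo = 1 / 8
arr = healed_nystatin - healed_placebo      # absolute benefit = 0.875
nnt = 1 / arr
print(f"ARR = {arr:.3f}, NNT = {nnt:.2f}")  # ARR = 0.875, NNT = 1.14
```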
A second study compared antifungal treatments with placebo. This randomized controlled trial from 1975 studied nystatin or amphotericin B lozenges in 52 patients with red palate, angular cheilitis, or both.2 These patients were identified by screening 600 consecutive patients attending a prosthetic clinic for examination or treatment. Patients were randomly given a 1-month supply of nystatin, amphotericin B, or placebo lozenges and instructed to dissolve 4 lozenges a day in their mouth. The study did not describe any blinding procedure. Both nystatin and amphotericin B had statistically significant cure rates at 1 month compared with placebo (P=.05 and P=.01, respectively). The NNT was 2.7 for the nystatin group and 2.0 for the amphotericin B group at 1 month. A comparison of the 2 antifungals found no difference in cure rate. Recurrence rates at 2 months after discontinuing therapy were the same. The only adverse effect reported was the unpleasant taste of the lozenges, especially nystatin.
Improving oral health is another proposed approach to treating angular cheilitis. Suggested modalities include denture cleaning, mouthwashes, and medicated chewing gums.
A randomized, double-blind, controlled study performed in 21 English nursing facilities enrolled 164 patients aged 60 years and older with some natural teeth and evaluated the effects of medicated chewing gum on oral health.3 At the end of 1 year, 111 patients (67%) had completed the study. Fifty-seven percent of the participants wore dentures.
Several aspects were measured including the presence of angular cheilitis. There were 3 arms: no gum, xylitol gum, and chlorhexidine acetate/xylitol gum. The gums were used after breakfast and the evening meal and consisted of 2 pellets to be chewed for 15 minutes. Adherence was described as chewing gum at least 12 times per week for 12 months. A blinded investigator examined patients at baseline, 3, 6, 9, and 12 months.
The results demonstrated a decrease in angular cheilitis in both the xylitol and chlorhexidine acetate/xylitol groups at 12 months compared with the no-gum group (P<.01). Cheilitis was found in 14% of the xylitol group (compared with 27% at baseline), 7% of the chlorhexidine acetate/xylitol group (a reduction from 28%), and 32% of the no-gum group (no change). The NNT was 7.7 for the xylitol group and 4.8 for the chlorhexidine acetate/xylitol group. This effect size may be exaggerated because the study randomized by nursing home rather than by individual patient, and no statistical adjustment was made for the cluster randomization.
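The article does not spell out how its NNTs were derived; one calculation that reproduces the reported figures takes the reciprocal of each gum group's drop in cheilitis prevalence from baseline to 12 months, as sketched below. Treat this derivation as an assumption, not the authors' stated method.

```python
# One way to reproduce the reported NNTs: 1 / (baseline prevalence - 12-month
# prevalence) within each gum group. This derivation is assumed, not stated
# in the source article.
groups = {
    "xylitol": (0.27, 0.14),
    "chlorhexidine acetate/xylitol": (0.28, 0.07),
}

for name, (baseline, month_12) in groups.items():
    nnt = 1 / (baseline - month_12)
    print(f"{name}: NNT ~ {nnt:.1f}")   # ~7.7 and ~4.8
```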
Chewing gum impregnated with chlorhexidine is not readily available in the United States, whereas xylitol-containing gums are sold in many retail stores and online.
Recommendations from others
We found no clinical guidelines regarding the treatment of angular cheilitis. The American Dental Association does mention topical antifungal creams for the treatment of angular cheilitis when discussing oral health and diabetes.4 In addition, Taylor’s Family Medicine recommends antifungals, including nystatin pastilles, clotrimazole troches, or a single 200-mg dose of fluconazole.5 Geriatric Medicine also recommends topical antifungals to treat angular cheilitis.6
To prevent recurrence, use xylitol gum or lip balms/petroleum jelly in the skin folds
Richard Hoffman, MD
Chesterfield Family Practice, Chesterfield, Va
Angular cheilitis is often mistakenly thought to be caused by a vitamin deficiency. As noted in this Clinical Inquiry, Candida infections in the moist skin folds around the mouth are the cause in elderly patients. The controlled trials show that antifungal preparations clearly work. In my experience, most topical anti-candidal agents work. To prevent recurrence, xylitol gum or aggressive use of lip balms or petroleum jelly in the skin folds is needed since these areas will invariably stay moist.
1. Ohman SC, Jontell M. Treatment of angular cheilitis: The significance of microbial analysis, antimicrobial treatment, and interfering factors. Acta Odontol Scand 1988;46:267-272.
2. Nairn RI. Nystatin and amphotericin B in the treatment of denture-related candidiasis. Oral Surg Oral Med Oral Pathol 1975;40:68-75.
3. Simons D, Brailsford SR, Kidd EA, Beighton D. The effects of medicated chewing gums on oral health in frail older people: a 1-year clinical trial. J Am Geriatr Soc 2002;50:1348-1353.
4. Vernillo AT. Dental considerations for the treatment of patients with diabetes mellitus. JADA 2003;134:24S-33S.
5. Taylor RB, ed. Family Medicine: Principles and Practice. 6th ed. New York: Springer; 2003.
6. Cassel CK, ed. Geriatric Medicine: An Evidence-Based Approach. 4th ed. New York: Springer; 2003.
Is DEET safe for children?
Reported evidence suggests that DEET use is safe for children older than 2 months, with only a very rare incidence of major adverse effects (strength of recommendation [SOR]: C). Typically, a topical concentration between 10% and 30% should be used (SOR: C). Increasing the DEET concentration does not improve protection, but it does increase the duration of action (SOR: A).
Evidence summary
The increasing prevalence of mosquito-borne diseases, including West Nile virus, has raised concerns about safe and effective forms of prevention. For decades, parents have used the insect repellent DEET (N,N-diethyl-m-toluamide), but questions remain regarding adverse effects, including seizures, particularly in children.
Two large case series suggested that the risk of DEET toxicity is low. The first collected poison control center reports during the 1980s. The report concluded that DEET exposure rarely led to adverse effects and that the route of administration (ie, ingestion) was more closely linked to toxicity than age or gender.1 Five major adverse reactions were reported from 9086 exposures to DEET (0.05%); these included hypotension, a hypotonic reaction, syncope, and 1 death (a suicide by ingestion).
The second series, also collected from poison control centers, included roughly 21,000 reports of DEET exposures during the 1990s. The authors concluded that the risk of toxicity was low and that there was no clear dose-dependent relationship between exposure and the severity of neurologic manifestations.2 This report found a rate of major adverse reactions (0.1%) similar to that of the first case series. The major reactions included hypotension, seizures, respiratory distress, and 2 deaths (0.01%). Among infants and children only, there were 10 major events in 17,252 reported exposures (0.06%) and no deaths. Although infants and children accounted for 83.1% of all reported exposures, most of the serious outcomes (including the deaths) occurred in adults. About half of those exposed reportedly had no ill effects, and the other half had minor effects (transient effects that resolved without treatment). Only 4% experienced moderate effects (a non-life-threatening problem that would likely require treatment). No data were presented on the overall size of the exposed population (eg, all users of DEET in the US).
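For readers tracing the percentages, the pediatric event rate and the overall death rate quoted follow from the raw counts reported above; a minimal check:

```python
# Rates reported in the second poison control center series, recomputed from the raw counts above
major_events_children = 10 / 17252   # major events among infants and children -> ~0.06%
deaths_overall = 2 / 21000           # deaths among roughly 21,000 total reports -> ~0.01%
print(f"{major_events_children:.2%}  {deaths_overall:.2%}")
```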
Two recent narrative reviews also concluded that DEET toxicity is rare in children. The first review found that DEET posed essentially no risk in children.3 The second review was sponsored by SC Johnson and Company, the makers of OFF! brand insect repellent. It assessed animal studies, epidemiologic data, and case reports, and supported the safety of DEET in children.4
A theoretical risk is that DEET toxicity could be enhanced by coapplication with other agents. Some studies have uncovered dangerous interactions with military and industrial chemicals, but such exposures are unlikely in most children. The most practical concern involves sunscreen: one study reported that sunscreen use increased the penetration of DEET.5 However, because the poison control center studies indicated that toxicity did not occur in a dose-dependent manner, the clinical significance of increased penetration is unclear.1,2
Increasing the concentration of DEET does not improve protection but does prolong it. A 6.65% concentration protects for about 2 hours, whereas 23.8% DEET lasts about 5 hours.6 Understanding this relationship allows parents to apply the lowest concentration that provides the duration of protection needed.
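To illustrate choosing the lowest adequate concentration, the sketch below maps the two reported data points to approximate protection times; the helper function and its behavior are illustrative only, not a dosing recommendation.

```python
# Illustrative lookup: approximate hours of protection by DEET concentration,
# using only the two data points reported by Fradin and Day (no interpolation)
PROTECTION_HOURS = {6.65: 2, 23.8: 5}  # % DEET -> approximate hours

def lowest_adequate_concentration(hours_needed: float):
    """Return the lowest listed concentration covering the needed duration, or None."""
    for concentration, hours in sorted(PROTECTION_HOURS.items()):
        if hours >= hours_needed:
            return concentration
    return None

print(lowest_adequate_concentration(3))  # -> 23.8
```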
Recommendations from others
The American Academy of Pediatrics recommends avoiding DEET in children under 2 months of age. For all other children, it advises using DEET with a concentration between 10% and 30%.7
Counsel parents to take 3 steps to prevent bites—avoid, cover up, and repel
Paul Crawford, MD
USAF-Eglin Family Practice Residency, Eglin Air Force Base, Fla
The emergence of West Nile virus has heightened awareness of mosquitoes, and I often field questions about how to protect children from bites. I counsel parents to take 3 steps to prevent bites—avoid, cover up, and repel. Mosquitoes are active at dawn and dusk, so staying indoors during these times is protective. Covering up with long sleeves, pants, and socks protects from most bites. Lastly, DEET repellent protects exposed areas from mosquitoes. Lotions make it easier to apply DEET to children. Commonly, parents express fear of DEET due to media reports. This review will help me ease their fears.
1. Veltri JC, Osimitz TG, Bradford DC, Page BC. Retrospective analysis of calls to poison control centers resulting from exposure to the insect repellent N,N-diethyl-m-toluamide (DEET) from 1985–1989. J Toxicol Clin Toxicol 1994;32:1-16.
2. Bell JW, Veltri JC, Page BC. Human exposures to N,N-diethyl-m-toluamide insect repellents reported to the American Association of Poison Control Centers 1993–1997. Int J Toxicol 2002;21:341-352.
3. Koren G, Matsui D, Bailey B. DEET-based insect repellents: safety implications for children and pregnant and lactating women. CMAJ 2003;169:209-212. Erratum in: CMAJ 2003;169:283.
4. Osimitz TG, Murphy JV. Neurological effects associated with use of the insect repellent N,N-diethyl-m-toluamide (DEET). J Toxicol Clin Toxicol 1997;35:443-445.
5. Ross EA, Savage KA, Utley LJ, Tebbett IR. Insect repellent interactions: sunscreens enhance DEET (N,N-diethyl-m-toluamide) absorption. Drug Metab Dispos 2004;32:783-785.
6. Fradin MS, Day JF. Comparative efficacy of insect repellents against mosquito bites. N Engl J Med 2002;347:13-18.
7. American Academy of Pediatrics. West Nile virus information. Available at: www.aap.org/family/wnv-jun03.htm. Accessed on April 8, 2005.
What is the best approach to the evaluation of hirsutism?
The evaluation of hirsutism should begin with a history and physical examination to identify signs and symptoms suggestive of diseases such as polycystic ovarian syndrome (PCOS), hypothyroidism, hyperprolactinemia, hyperandrogenic insulin-resistant acanthosis nigricans (HAIR-AN) syndrome, androgenic tumors, Cushing’s syndrome, or congenital adrenal hyperplasia (CAH). Findings suggestive of these diseases include rapid or early-onset hirsutism, menstrual irregularities, hypertension, severe hirsutism, virilization, or pelvic masses (strength of recommendation [SOR]: B, based on a cohort study in a referral population) (TABLE). Hirsutism with unremarkable history and physical exam findings should be evaluated with a serum total testosterone and dehydroepiandrosterone sulfate (DHEAS) level (SOR: B, based on a cohort study in a referral population).
TABLE
Differential diagnosis of clinically apparent androgen excess
DIAGNOSIS | INCIDENCE | KEY HISTORY/EXAM FINDINGS | ADDITIONAL TESTING
---|---|---|---
Polycystic ovarian syndrome | 82.0% | ± irregular menses, slow-onset hirsutism, obesity, infertility, diabetes, hypertension, family history of PCOS or diabetes | Fasting glucose, insulin, and lipid profile; blood pressure; ultrasound positive for multiple ovarian cysts
Hyperandrogenism with hirsutism, normal ovulation | 6.8% | Regular menses, acne, hirsutism without detectable endocrine cause | Elevated androgen levels and normal serum progesterone in luteal phase
Idiopathic hirsutism | 4.7% | Regular menses, hirsutism, possibly increased 5-alpha-reductase activity in skin and hair follicles | Normal androgen levels, normal serum progesterone in luteal phase
Hyperandrogenic insulin-resistant acanthosis nigricans (HAIR-AN) | 3.1% | Brown velvety patches of skin (acanthosis nigricans), obesity, hypertension, hyperlipidemia, strong family history of diabetes | Fasting glucose and lipid profile; BP; fasting insulin level >80 μIU/mL or insulin level >300 on 3-hour glucose tolerance test
21-hydroxylase nonclassic adrenal hyperplasia (late-onset CAH) | 1.6% | Severe hirsutism or virilization, strong family history of CAH, short stature, signs of defeminization; more common in Ashkenazi Jews and those of Eastern European descent | 17-HP level before and after ACTH stimulation test (>10 ng/mL); CYP21 genotyping
21-hydroxylase-deficient congenital adrenal hyperplasia | 0.7% | See late-onset CAH; congenital virilization | 17-HP levels >30 ng/mL
Hypothyroidism | 0.7%* | Fatigue, weight gain, history of thyroid ablation or untreated hypothyroidism, amenorrhea | TSH
Hyperprolactinemia | 0.3%† | Amenorrhea, galactorrhea, infertility | Prolactin
Androgen-secreting neoplasm | 0.2% | Pelvic masses, rapid-onset hirsutism or virilization, symptom onset after age 30 | Pelvic ultrasound or abdominal/pelvic CT scan
Cushing’s syndrome | 0%‡ | Hypertension, buffalo hump, purple striae, truncal obesity | Elevated blood pressure, positive dexamethasone suppression test
*Five patients were previously diagnosed with hypothyroidism and 1 patient was diagnosed as part of the work-up, for a total prevalence of 6 in 873 (0.7%), although the de novo incidence was only 0.1%.
†Two patients were previously diagnosed with hyperprolactinemia and 1 was detected during the work-up, for a total prevalence of 3 in 873 (0.3%), although the de novo incidence was 0.1%.
‡No patients were identified with Cushing’s syndrome in this study; other published reports vary from 0% to 1% (3).
Source: Azziz et al, J Clin Endocrinol Metab 2004 (reference 2); Azziz, Obstet Gynecol 2003 (reference 8).
Evidence summary
Hirsutism is the presence of excess terminal hair in androgen-dependent areas of a woman's body; it can be measured objectively using a scoring system such as the modified Ferriman-Gallwey (mF-G) score, which sums hair scores (0=none, 4=frankly virile) at 9 body sites. A total score >8 is considered hirsute. The prevalence of hirsutism in the US is about 8%, based on a prospective study applying the mF-G criteria to 369 consecutive women of reproductive age seeking pre-employment physicals in the southeastern US.1
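To make the scoring concrete, here is a minimal tally of an mF-G score; the 9 sites listed are the ones conventionally scored, but the individual scores are invented for illustration.

```python
# Modified Ferriman-Gallwey (mF-G) score: each of 9 androgen-sensitive body sites is rated
# from 0 (no terminal hair) to 4 (frankly virile); a total >8 is conventionally considered hirsute.
# The example scores below are illustrative only.
mfg_scores = {
    "upper lip": 1, "chin": 2, "chest": 1,
    "upper back": 0, "lower back": 1,
    "upper abdomen": 1, "lower abdomen": 2,
    "upper arms": 0, "thighs": 1,
}
total = sum(mfg_scores.values())
print(total, "hirsute" if total > 8 else "not hirsute")  # 9 -> hirsute
```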
The causes of clinically apparent androgen excess, including acne and hirsutism, were evaluated in 1281 consecutive patients presenting to a university endocrinology clinic.2 Researchers excluded 408 subjects due to the inability to assess hormone status or ovulatory function. The remaining 873 women were assessed by clinical exam, mF-G score, serum total and free testosterone, DHEAS, and 17-hydroxyprogesterone (17-HP). Hyperandrogenism was defined as an androgen value above the 95th percentile of 98 healthy control women (total testosterone ≥88 ng/dL, free testosterone ≥0.75 ng/dL, or DHEAS ≥2750 ng/dL). Those with a 17-HP level >2 ng/mL had either a repeat 17-HP or adrenocorticotropic hormone (ACTH) stimulation test. Those with at least 2 total testosterone levels above 250 ng/dL or those with signs of an androgen-secreting neoplasm (eg, virilization) underwent a transvaginal sonogram and a CT scan of the adrenals. Patients with ovulatory dysfunction had a thyroid-stimulating hormone (TSH) and prolactin level drawn. If Cushing’s syndrome was suspected clinically, the subjects underwent an overnight 1-mg dexamethasone suppression test (TABLE). Of 873 patients, 75.5% had hirsutism and 77.8% had hyperandrogenemia. An identifiable disorder of androgen excess was found in 7%; functional androgen excess (principally PCOS) was identified in the remainder.
The incidence of endocrine disorders among patients presenting with hirsutism or androgenic alopecia was evaluated in a prospective study of 350 consecutive patients referred to an endocrine clinic in the UK.3 Testing included serum total testosterone, androstenedione, 17-HP, and DHEAS on 2 occasions; patients also underwent high-resolution pelvic ultrasound. Further investigations were done only for those with abnormal hormone levels or clinical findings suggestive of a tumor. Of the 350 women tested, 13 had a markedly elevated serum total testosterone level (>5 nmol/L [150 ng/dL]). A single total testosterone measurement identified 6 of the 8 patients with an underlying endocrine disorder; the other 2 had acromegaly or a prolactinoma. The researchers concluded that clinical assessment and a single serum total testosterone level were sufficient to exclude enzyme deficiencies and virilizing tumors.
A retrospective study of 84 consecutive women presenting to an endocrinology clinic in the Netherlands assessed the sensitivity and specificity of hormone levels for identifying virilizing adrenal tumors.4 Hormone levels of 14 women with either an adrenal carcinoma (n=12) or an adrenal adenoma (n=2) were compared with those of women with hirsutism (n=73) and of controls (n=31). Serum total testosterone, androstenedione, DHEAS, DHEA, and cortisol were measured, along with 24-hour urinary 17-ketosteroid excretion. A 5-day dexamethasone suppression study was conducted, with a urine sample obtained between 8 and 9 AM on day 6. An elevated basal total testosterone (normal range, 29–84 ng/dL) or DHEAS level (normal range, 118–431 ng/dL) detected all 14 women with adrenal carcinomas or adenomas and 36 of the 73 women with hirsutism of non-neoplastic origin. The combined test sensitivity was 100% (95% confidence interval [CI], 77–100) and specificity was 50% (95% CI, 38–62) for the detection of adrenal tumors.
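The quoted sensitivity and specificity can be reconstructed from the counts above, assuming the specificity denominator is the 73 hirsute women without tumors; a minimal check:

```python
# Test characteristics of an elevated basal total testosterone or DHEAS for virilizing adrenal tumors,
# recomputed from the counts above (the 31 controls are assumed not to enter the specificity calculation)
true_positives, tumors = 14, 14      # all women with adrenal carcinoma or adenoma were detected
false_positives, non_tumor = 36, 73  # hirsute women without tumors who also tested positive
sensitivity = true_positives / tumors                    # 1.00
specificity = (non_tumor - false_positives) / non_tumor  # ~0.51 (reported as 50%; 95% CI, 38-62)
print(f"sensitivity = {sensitivity:.0%}, specificity = {specificity:.0%}")
```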
A prospective study of the prevalence of late-onset CAH among hirsute women evaluated 83 consecutive patients with hirsutism from an endocrinology clinic in California using an ACTH stimulation test.5 One patient with late-onset CAH was found. Because late-onset CAH had a prevalence of only 1.2% (95% CI, 0.0%–3.4%), the authors concluded that routine ACTH stimulation testing is not cost-effective in the evaluation of hirsutism.
Recommendations from others
A 1995 technical bulletin from the American College of Obstetricians and Gynecologists recommended using the clinical examination to guide the evaluation and laboratory testing, including serum total testosterone and DHEAS levels, to rule out androgen-producing tumors.6 The Society of Obstetricians and Gynaecologists of Canada likewise advised using the clinical examination to guide the assessment, along with serum total testosterone and DHEAS levels.7
Referral is recommended in the presence of virilism, if the total testosterone or DHEAS level is more than twice the upper limit of normal, or if there are signs of Cushing’s disease.
Early work on expectations by physician and patient leads to a better outcome
Tim Huber, MD
Naval Hospital, Camp Pendleton, Calif
Primary care physicians field questions about nonspecific findings on a day-to-day basis. Hirsutism is a common complaint and physical finding in women. Most diagnoses related to hirsutism are not life-threatening and have a relatively straightforward workup. There is the occasional patient with a zebra-type diagnosis that demands more detailed evaluation. As with most physical findings that have a large subjective component, I find that early management of expectations both on the part of the physician and patient leads to a better outcome whether or not a million-dollar workup shows any definitive pathology.
1. Knochenhauer ES, Key TJ, Kahsar-Miller M, Waggoner W, Boots LR, Azziz R. Prevalence of the polycystic ovary syndrome in unselected black and white women of the southeastern United States: a prospective study. J Clin Endocrinol Metab 1998;83:3078-3082.
2. Azziz R, Sanchez A, Knochenhauer ES, et al. Androgen excess in women: experience with over 1000 consecutive patients. J Clin Endocrinol Metab 2004;89:453-462.
3. O’Driscoll JB, Mamtora H, Higginson J, Pollack A, Kane J, Anderson DC. A prospective study of the prevalence of clear-cut endocrine disorders and polycystic ovaries in 350 patients presenting with hirsutism or androgenic alopecia. Clin Endocrinol (Oxf) 1994;41:231-236.
4. Derksen J, Nagesser SK, Meinders AE, Haak HR, van de Velde CJH. Identification of virilizing adrenal tumors in hirsute women. N Engl J Med 1994;331:968-973.
5. Chetkowski RJ, DeFazio J, Shamonki I, Juss HL, Chang RJ. The incidence of late-onset congenital adrenal hyperplasia due to 21-hydroxylase deficiency among hirsute women. J Clin Endocrinol Metab 1984;58:595-598.
6. ACOG technical bulletin. Evaluation and treatment of hirsute women. Int J Gynecol Obstet 1995;49:341-346.
7. Claman P, Graves GR, Kredentser JV, Sagle MA, Tummon TS, Fluker M. SOGC Clinical Practice Guidelines. Hirsutism: Evaluation and treatment. J Obstet Gynaecology Canada 2002;24:62-73.
8. Azziz R. The evaluation and management of hirsutism. Obstet Gynecol 2003;101:995-1006.
The evaluation of hirsutism should begin with a history and physical examination to identify signs and symptoms suggestive of diseases such as polycystic ovarian syndrome (PCOS), hypothyroidism, hyperprolactinemia, hyperandrogenic insulin-resistant acanthosis nigricans (HAIR-AN) syndrome, androgenic tumors, Cushing’s syndrome, or congenital adrenal hyperplasia (CAH). Findings suggestive of these diseases include rapid or early-onset hirsutism, menstrual irregularities, hypertension, severe hirsutism, virilization, or pelvic masses (strength of recommendation [SOR]: B, based on a cohort study in a referral population) (TABLE). Hirsutism with unremarkable history and physical exam findings should be evaluated with a serum total testosterone and dehydroepiandrosterone sulfate (DHEAS) level (SOR: B, based on a cohort study in a referral population).
TABLE
Differential diagnosis of clinically apparent androgen excess
DIAGNOSIS | INCIDENCE | KEY HISTORY/EXAM FINDINGS | ADDITIONAL TESTING |
---|---|---|---|
Polycystic ovarian syndrome | 82.0% | ± irregular menses, slow-onset hirsutism, obesity, infertility, diabetes, hypertension, family history of PCOS, diabetes | Fasting glucose, insulin and lipid profile, blood pressure, ultrasound positive for multiple ovarian cysts |
Hyperandrogenism with hirsutism, normal ovulation | 6.8% | Regular menses, acne, hirsutism without detectable endocrine cause | Elevated androgen levels and normal serum progesterone in luteal phase |
Idiopathic hirsutism | 4.7% | Regular menses, hirsutism, possible overactive 5 alpha-reductase activity in skin and hair follicle | Normal androgen levels, normal serum progesterone in luteal phase |
Hyperandrogenic insulin-resistant acanthosis nigricans (HAIR-AN) | 3.1% | Brown velvety patches of skin (acanthosis nigricans), obesity, hypertension, hyperlipidemia, strong family history of diabetes | Fasting glucose and lipid profile, BP, fasting insulin level >80 μIU/mL or insulin level >300 on 3-hour glucose tolerance test |
21-hydroxylase non-classic adrenal hyperplasia (late-onset CAH) | 1.6% | Severe hirsutism or virilization, strong family history of CAH, short stature, signs of defeminization, more common in Ashkenazi Jews and Eastern European decent | 17-HP level before and after ACTH stimulation test >10 ng/dL, CYP21 genotyping. |
21-hydroxylase-deficient congenital adrenal hyperplasia | 0.7% | See Late-onset CAH. Congenital virilization | 17-HP levels >30 ng/dL |
Hypothyroidism | 0.7%* | Fatigue, weight gain, history of thyroid ablation and untreated hypothyroidism, amenorrhea | TSH |
Hyperprolactinemia | 0.3%† | Amenorrhea, galactorrhea, infertility | Prolactin |
Androgenic secreting neoplasm | 0.2% | Pelvic masses, rapid-onset hirsutism or virilization, over age 30 with onset of symptoms | Pelvic ultrasound or abdomen/pelvic CT scan |
Cushing’s syndrome | 0%‡ | Hypertension, buffalo hump, purple striae, truncal obesity | Elevated blood pressure, positive dexamethasone suppression test |
*Five patients were previously diagnosed with hypothyroidism and 1 patient was diagnosed as part of the work-up for a total prevalence of 6 in 873 or 0.7% although the de novo incidence was only 0.1%. | |||
†Two patients were previously diagnosed with hyperprolactinemia and 1 was detected during the work-up for a total prevalence of 3 in 873 or 0.3% although the de novo incidence was 0.1%. | |||
‡No patients were identified with Cushing’s syndrome in this study. Other published reports vary from 0-1% (3). | |||
Source: Azziz et al, J Clin Endocrinol Metab 20042; Azziz, Obstet Gynecol 2003.8 |
Evidence summary
Hirsutism is the presence of excess terminal hairs in androgen-dependent areas on a female, and can be measured objectively using a scoring system such as the modified Ferriman-Gallway (mF-G) score. This test is done by adding hair scores (0=none, 4=frankly virile) in 9 different body locations. A total score >8 is considered hirsute. The incidence of hirsutism in the US is about 8%, based on a prospective study of 369 consecutive women of reproductive age seeking pre-employment physicals in the southeastern US using the mF-G criteria.1
The causes of clinically apparent androgen excess, including acne and hirsutism, were evaluated in 1281 consecutive patients presenting to a university endocrinology clinic.2 Researchers excluded 408 subjects due to the inability to assess hormone status or ovulatory function. The remaining 873 women were assessed by clinical exam, mF-G score, serum total and free testosterone, DHEAS, and 17-hydroxyprogesterone (17-HP). Hyperandrogenism was defined as an androgen value above the 95th percentile of 98 healthy control women (total testosterone ≥88 ng/dL, free testosterone ≥0.75 ng/dL, or DHEAS ≥2750 ng/dL). Those with a 17-HP level >2 ng/mL had either a repeat 17-HP or adrenocorticotropic hormone (ACTH) stimulation test. Those with at least 2 total testosterone levels above 250 ng/dL or those with signs of an androgen-secreting neoplasm (eg, virilization) underwent a transvaginal sonogram and a CT scan of the adrenals. Patients with ovulatory dysfunction had a thyroid-stimulating hormone (TSH) and prolactin level drawn. If Cushing’s syndrome was suspected clinically, the subjects underwent an overnight 1-mg dexamethasone suppression test (TABLE). Of 873 patients, 75.5% had hirsutism and 77.8% had hyperandrogenemia. An identifiable disorder of androgen excess was found in 7%; functional androgen excess (principally PCOS) was identified in the remainder.
The incidence of endocrine disorders among patients presenting with hirsutism or androgenic alopecia was evaluated during a prospective study of 350 consecutive patients referred to an endocrine clinic in the UK.3 Testing included serum total testosterone, androstenedione, 17-HP, and DHEAS on 2 occasions. Patients also underwent high-resolution pelvic ultrasound. Further investigations were done only for those with abnormal hormone levels or clinical findings suggestive of a tumor. Of 350 women tested, 13 had a markedly elevated serum total testosterone level >5 nmol/L (150 ng/dL). A single total testosterone test identified 6 of 8 patients with an underlying endocrine disorder. The other 2 had either acromegaly or prolactinoma. The researchers concluded that clinical assessment and a single serum total testosterone level were sufficient to exclude enzyme deficiencies and virilizing tumors.
A retrospective study of 84 consecutive women presenting to an endocrinology clinic in the Netherlands was conducted to determine hormone level sensitivity and specificity to identify virilizing adrenal tumors.4 Hormone levels of 14 women with either an adrenal carcinoma (n=12) or an adrenal adenoma (n=2) were compared with the hormone levels of the women with hirsutism (n=73) as well as to the controls (n=31). Serum levels of total testosterone, androstenedione, DHEAS, DHEA, and cortisol were measured. A 24-hour urinary 17-ketosteroid excretion was also measured. A 5-day dexamethasone suppression study was conducted and a urinary sample was obtained between 8 and 9 A.M. on Day 6. An elevated basal total testosterone (normal range, 29–84 ng/dL) or DHEAS level (normal range, 118–431 ng/dL) detected all 14 women with adrenal carcinomas or adenomas and 36 of 73 women with hirsutism of non-neoplastic origin. The combined test sensitivity was 100% (95% confidence interval [CI], 77–100) and specificity was 50% (95% CI, 38–62) for the detection of adrenal tumors.
A prospective study of the incidence of late-onset CAH among hirsute women evaluated 83 consecutive patients with hirsutism from an endocrinology clinic in California with an ACTH stimulation test.5 They found 1 patient with late-onset CAH. Because CAH had an incidence of only 1.2% (95% CI, 0.0–3.4), the authors concluded that routine testing with the ACTH stimulation test is not cost-effective for the evaluation of hirsutism.
Recommendations from others
The American College of Obstetrics and Gynecology 1995 technical bulletin recommended using the clinical examination to guide the evaluation, and laboratory testing to rule out androgen-producing tumors including a serum total testosterone and DHEAS.6 The Society of Obstetricians and Gynaecologists of Canada advised using the clinical examination to guide the assessment, and a total serum testosterone level and a DHEAS level.7
Referral is recommended in the presence of virilism or if the total testosterone or DHEAS level is over twice the upper limit of normal or if there are signs of Cushing’s disease.
Early work on expectations by physician and patient leads to a better outcome
Tim Huber, MD
Naval Hospital, Camp Pendleton, Calif
Primary care physicians field questions about nonspecific findings on a day-to-day basis. Hirsutism is a common complaint and physical finding in women. Most diagnoses related to hirsutism are not life-threatening and have a relatively straightforward workup. There is the occasional patient with a zebra-type diagnosis that demands more detailed evaluation. As with most physical findings that have a large subjective component, I find that early management of expectations both on the part of the physician and patient leads to a better outcome whether or not a million-dollar workup shows any definitive pathology.
The evaluation of hirsutism should begin with a history and physical examination to identify signs and symptoms suggestive of diseases such as polycystic ovarian syndrome (PCOS), hypothyroidism, hyperprolactinemia, hyperandrogenic insulin-resistant acanthosis nigricans (HAIR-AN) syndrome, androgenic tumors, Cushing’s syndrome, or congenital adrenal hyperplasia (CAH). Findings suggestive of these diseases include rapid or early-onset hirsutism, menstrual irregularities, hypertension, severe hirsutism, virilization, or pelvic masses (strength of recommendation [SOR]: B, based on a cohort study in a referral population) (TABLE). Hirsutism with unremarkable history and physical exam findings should be evaluated with a serum total testosterone and dehydroepiandrosterone sulfate (DHEAS) level (SOR: B, based on a cohort study in a referral population).
TABLE
Differential diagnosis of clinically apparent androgen excess
DIAGNOSIS | INCIDENCE | KEY HISTORY/EXAM FINDINGS | ADDITIONAL TESTING |
---|---|---|---|
Polycystic ovarian syndrome | 82.0% | ± irregular menses, slow-onset hirsutism, obesity, infertility, diabetes, hypertension, family history of PCOS, diabetes | Fasting glucose, insulin and lipid profile, blood pressure, ultrasound positive for multiple ovarian cysts |
Hyperandrogenism with hirsutism, normal ovulation | 6.8% | Regular menses, acne, hirsutism without detectable endocrine cause | Elevated androgen levels and normal serum progesterone in luteal phase |
Idiopathic hirsutism | 4.7% | Regular menses, hirsutism, possible overactive 5 alpha-reductase activity in skin and hair follicle | Normal androgen levels, normal serum progesterone in luteal phase |
Hyperandrogenic insulin-resistant acanthosis nigricans (HAIR-AN) | 3.1% | Brown velvety patches of skin (acanthosis nigricans), obesity, hypertension, hyperlipidemia, strong family history of diabetes | Fasting glucose and lipid profile, BP, fasting insulin level >80 μIU/mL or insulin level >300 on 3-hour glucose tolerance test |
21-hydroxylase non-classic adrenal hyperplasia (late-onset CAH) | 1.6% | Severe hirsutism or virilization, strong family history of CAH, short stature, signs of defeminization, more common in Ashkenazi Jews and Eastern European decent | 17-HP level before and after ACTH stimulation test >10 ng/dL, CYP21 genotyping. |
21-hydroxylase-deficient congenital adrenal hyperplasia | 0.7% | See Late-onset CAH. Congenital virilization | 17-HP levels >30 ng/dL |
Hypothyroidism | 0.7%* | Fatigue, weight gain, history of thyroid ablation and untreated hypothyroidism, amenorrhea | TSH |
Hyperprolactinemia | 0.3%† | Amenorrhea, galactorrhea, infertility | Prolactin |
Androgenic secreting neoplasm | 0.2% | Pelvic masses, rapid-onset hirsutism or virilization, over age 30 with onset of symptoms | Pelvic ultrasound or abdomen/pelvic CT scan |
Cushing’s syndrome | 0%‡ | Hypertension, buffalo hump, purple striae, truncal obesity | Elevated blood pressure, positive dexamethasone suppression test |
*Five patients were previously diagnosed with hypothyroidism and 1 patient was diagnosed as part of the work-up for a total prevalence of 6 in 873 or 0.7% although the de novo incidence was only 0.1%. | |||
†Two patients were previously diagnosed with hyperprolactinemia and 1 was detected during the work-up for a total prevalence of 3 in 873 or 0.3% although the de novo incidence was 0.1%. | |||
‡No patients were identified with Cushing’s syndrome in this study. Other published reports vary from 0-1% (3). | |||
Source: Azziz et al, J Clin Endocrinol Metab 20042; Azziz, Obstet Gynecol 2003.8 |
Evidence summary
Hirsutism is the presence of excess terminal hairs in androgen-dependent areas on a female, and can be measured objectively using a scoring system such as the modified Ferriman-Gallway (mF-G) score. This test is done by adding hair scores (0=none, 4=frankly virile) in 9 different body locations. A total score >8 is considered hirsute. The incidence of hirsutism in the US is about 8%, based on a prospective study of 369 consecutive women of reproductive age seeking pre-employment physicals in the southeastern US using the mF-G criteria.1
The causes of clinically apparent androgen excess, including acne and hirsutism, were evaluated in 1281 consecutive patients presenting to a university endocrinology clinic.2 Researchers excluded 408 subjects due to the inability to assess hormone status or ovulatory function. The remaining 873 women were assessed by clinical exam, mF-G score, serum total and free testosterone, DHEAS, and 17-hydroxyprogesterone (17-HP). Hyperandrogenism was defined as an androgen value above the 95th percentile of 98 healthy control women (total testosterone ≥88 ng/dL, free testosterone ≥0.75 ng/dL, or DHEAS ≥2750 ng/dL). Those with a 17-HP level >2 ng/mL had either a repeat 17-HP or adrenocorticotropic hormone (ACTH) stimulation test. Those with at least 2 total testosterone levels above 250 ng/dL or those with signs of an androgen-secreting neoplasm (eg, virilization) underwent a transvaginal sonogram and a CT scan of the adrenals. Patients with ovulatory dysfunction had a thyroid-stimulating hormone (TSH) and prolactin level drawn. If Cushing’s syndrome was suspected clinically, the subjects underwent an overnight 1-mg dexamethasone suppression test (TABLE). Of 873 patients, 75.5% had hirsutism and 77.8% had hyperandrogenemia. An identifiable disorder of androgen excess was found in 7%; functional androgen excess (principally PCOS) was identified in the remainder.
The prevalence of endocrine disorders among patients presenting with hirsutism or androgenic alopecia was evaluated in a prospective study of 350 consecutive patients referred to an endocrine clinic in the UK.3 Testing included serum total testosterone, androstenedione, 17-HP, and DHEAS, each measured on 2 occasions. Patients also underwent high-resolution pelvic ultrasound. Further investigations were performed only for those with abnormal hormone levels or clinical findings suggestive of a tumor. Of the 350 women tested, 13 had a markedly elevated serum total testosterone level (>5 nmol/L [150 ng/dL]). A single total testosterone measurement identified 6 of the 8 patients found to have an underlying endocrine disorder; the other 2 had acromegaly or a prolactinoma. The researchers concluded that clinical assessment plus a single serum total testosterone level was sufficient to exclude enzyme deficiencies and virilizing tumors.
A retrospective study of 84 consecutive women presenting to an endocrinology clinic in the Netherlands was conducted to determine the sensitivity and specificity of hormone levels for identifying virilizing adrenal tumors.4 Hormone levels of 14 women with either an adrenal carcinoma (n=12) or an adrenal adenoma (n=2) were compared with those of 73 women with hirsutism of non-neoplastic origin and 31 controls. Serum levels of total testosterone, androstenedione, DHEAS, DHEA, and cortisol were measured, as was 24-hour urinary 17-ketosteroid excretion. A 5-day dexamethasone suppression study was also conducted, with a urine sample obtained between 8 and 9 AM on day 6. An elevated basal total testosterone (normal range, 29–84 ng/dL) or DHEAS level (normal range, 118–431 ng/dL) detected all 14 women with adrenal carcinomas or adenomas, as well as 36 of the 73 women with hirsutism of non-neoplastic origin. The combined test sensitivity was 100% (95% confidence interval [CI], 77–100) and specificity was 50% (95% CI, 38–62) for the detection of adrenal tumors.
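As a rough check on the reported operating characteristics, the short calculation below reconstructs them from the counts given above (14 of 14 tumors detected; 36 of 73 non-neoplastic hirsute women also flagged). The paper's CI method is not stated, so a Clopper-Pearson bound and a Wald interval are assumed here, which only approximately reproduce the published 77–100 and 38–62 intervals.

```python
# Reconstructing the reported sensitivity and specificity from the counts
# above, treating "elevated basal total testosterone or DHEAS" as the
# positive test. CI methods are assumed (Clopper-Pearson for the 14/14
# sensitivity, Wald for the specificity).
import math

# Sensitivity: all 14 tumors were detected.
n_tumors, detected = 14, 14
sensitivity = detected / n_tumors
# Exact lower bound when every case tests positive: (alpha/2) ** (1/n)
sens_lower = 0.025 ** (1 / n_tumors)
print(f"sensitivity {sensitivity:.0%}, 95% CI lower bound ~{sens_lower:.0%}")

# Specificity: 73 - 36 = 37 non-neoplastic women tested negative.
n_benign, true_negative = 73, 73 - 36
specificity = true_negative / n_benign
se = math.sqrt(specificity * (1 - specificity) / n_benign)  # Wald standard error
lower, upper = specificity - 1.96 * se, specificity + 1.96 * se
print(f"specificity {specificity:.0%}, 95% CI ~{lower:.0%} to {upper:.0%}")
# -> sensitivity 100%, 95% CI lower bound ~77%
# -> specificity 51%, 95% CI ~39% to 62%
```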
A prospective study of the incidence of late-onset CAH among hirsute women evaluated 83 consecutive patients with hirsutism from an endocrinology clinic in California using an ACTH stimulation test.5 One patient was found to have late-onset CAH. Because late-onset CAH had an incidence of only 1.2% (95% CI, 0.0%–3.4%), the authors concluded that routine ACTH stimulation testing is not cost-effective in the evaluation of hirsutism.
Recommendations from others
In its 1995 technical bulletin, the American College of Obstetricians and Gynecologists recommended using the clinical examination to guide the evaluation and laboratory testing, including serum total testosterone and DHEAS levels, to rule out androgen-producing tumors.6 The Society of Obstetricians and Gynaecologists of Canada likewise advised using the clinical examination to guide the assessment, along with serum total testosterone and DHEAS levels.7
Referral is recommended in the presence of virilism, a total testosterone or DHEAS level more than twice the upper limit of normal, or signs of Cushing’s disease.
Early management of expectations by physician and patient leads to a better outcome
Tim Huber, MD
Naval Hospital, Camp Pendleton, Calif
Primary care physicians field questions about nonspecific findings on a day-to-day basis, and hirsutism is a common complaint and physical finding in women. Most diagnoses related to hirsutism are not life-threatening and have a relatively straightforward workup, though the occasional patient has a zebra-type diagnosis that demands more detailed evaluation. As with most physical findings that have a large subjective component, I find that early management of expectations on the part of both physician and patient leads to a better outcome, whether or not a million-dollar workup shows any definitive pathology.
1. Knochenhauer ES, Key TJ, Kahsar-Miller M, Waggoner W, Boots LR, Azziz R. Prevalence of the polycystic ovary syndrome in unselected black and white women of the southeastern United States: a prospective study. J Clin Endocrinol Metab 1998;83:3078-3082.
2. Azziz R, Sanchez A, Knochenhauer ES, et al. Androgen excess in women: experience with over 1000 consecutive patients. J Clin Endocrinol Metab 2004;89:453-462.
3. O’Driscoll JB, Mamtora H, Higginson J, Pollack A, Kane J, Anderson DC. A prospective study of the prevalence of clear-cut endocrine disorders and polycystic ovaries in 350 patients presenting with hirsutism or androgenic alopecia. Clin Endocrinol (Oxf) 1994;41:231-236.
4. Derksen J, Nagesser SK, Meinders AE, Haak HR, van de Velde CJH. Identification of virilizing adrenal tumors in hirsute women. N Engl J Med 1994;331:968-973.
5. Chetkowski RJ, DeFazio J, Shamonki I, Judd HL, Chang RJ. The incidence of late-onset congenital adrenal hyperplasia due to 21-hydroxylase deficiency among hirsute women. J Clin Endocrinol Metab 1984;58:595-598.
6. ACOG technical bulletin. Evaluation and treatment of hirsute women. Int J Gynecol Obstet 1995;49:341-346.
7. Claman P, Graves GR, Kredentser JV, Sagle MA, Tummon TS, Fluker M. SOGC clinical practice guidelines. Hirsutism: evaluation and treatment. J Obstet Gynaecol Can 2002;24:62-73.
8. Azziz R. The evaluation and management of hirsutism. Obstet Gynecol 2003;101:995-1006.
Evidence-based answers from the Family Physicians Inquiries Network